00:00:00.001 Started by upstream project "autotest-per-patch" build number 132360
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.033 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.034 The recommended git tool is: git
00:00:00.034 using credential 00000000-0000-0000-0000-000000000002
00:00:00.038 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.060 Fetching changes from the remote Git repository
00:00:00.063 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.095 Using shallow fetch with depth 1
00:00:00.095 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.095 > git --version # timeout=10
00:00:00.127 > git --version # 'git version 2.39.2'
00:00:00.127 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.151 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.151 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.714 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.725 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.737 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.737 > git config core.sparsecheckout # timeout=10
00:00:04.750 > git read-tree -mu HEAD # timeout=10
00:00:04.768 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.794 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.795 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.917 [Pipeline] Start of Pipeline
00:00:04.933 [Pipeline] library
00:00:04.935 Loading library shm_lib@master
00:00:04.935 Library shm_lib@master is cached. Copying from home.
00:00:04.956 [Pipeline] node
00:00:04.966 Running on GP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.968 [Pipeline] {
00:00:04.979 [Pipeline] catchError
00:00:04.982 [Pipeline] {
00:00:04.998 [Pipeline] wrap
00:00:05.008 [Pipeline] {
00:00:05.019 [Pipeline] stage
00:00:05.022 [Pipeline] { (Prologue)
00:00:05.213 [Pipeline] sh
00:00:05.499 + logger -p user.info -t JENKINS-CI
00:00:05.520 [Pipeline] echo
00:00:05.521 Node: GP6
00:00:05.530 [Pipeline] sh
00:00:05.843 [Pipeline] setCustomBuildProperty
00:00:05.856 [Pipeline] echo
00:00:05.857 Cleanup processes
00:00:05.863 [Pipeline] sh
00:00:06.153 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.153 3126265 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.167 [Pipeline] sh
00:00:06.450 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.450 ++ grep -v 'sudo pgrep'
00:00:06.450 ++ awk '{print $1}'
00:00:06.450 + sudo kill -9
00:00:06.450 + true
00:00:06.466 [Pipeline] cleanWs
00:00:06.476 [WS-CLEANUP] Deleting project workspace...
00:00:06.476 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.482 [WS-CLEANUP] done
00:00:06.486 [Pipeline] setCustomBuildProperty
00:00:06.500 [Pipeline] sh
00:00:06.787 + sudo git config --global --replace-all safe.directory '*'
00:00:06.869 [Pipeline] httpRequest
00:00:07.544 [Pipeline] echo
00:00:07.546 Sorcerer 10.211.164.20 is alive
00:00:07.556 [Pipeline] retry
00:00:07.558 [Pipeline] {
00:00:07.572 [Pipeline] httpRequest
00:00:07.576 HttpMethod: GET
00:00:07.577 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.577 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.603 Response Code: HTTP/1.1 200 OK
00:00:07.603 Success: Status code 200 is in the accepted range: 200,404
00:00:07.603 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:33.369 [Pipeline] }
00:00:33.392 [Pipeline] // retry
00:00:33.401 [Pipeline] sh
00:00:33.692 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:33.710 [Pipeline] httpRequest
00:00:34.095 [Pipeline] echo
00:00:34.097 Sorcerer 10.211.164.20 is alive
00:00:34.109 [Pipeline] retry
00:00:34.111 [Pipeline] {
00:00:34.126 [Pipeline] httpRequest
00:00:34.130 HttpMethod: GET
00:00:34.131 URL: http://10.211.164.20/packages/spdk_7a5718d026e82b588108d8e87d8d0ff9b2b7baf6.tar.gz
00:00:34.131 Sending request to url: http://10.211.164.20/packages/spdk_7a5718d026e82b588108d8e87d8d0ff9b2b7baf6.tar.gz
00:00:34.142 Response Code: HTTP/1.1 200 OK
00:00:34.142 Success: Status code 200 is in the accepted range: 200,404
00:00:34.142 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_7a5718d026e82b588108d8e87d8d0ff9b2b7baf6.tar.gz
00:01:42.328 [Pipeline] }
00:01:42.345 [Pipeline] // retry
00:01:42.350 [Pipeline] sh
00:01:42.631 + tar --no-same-owner -xf spdk_7a5718d026e82b588108d8e87d8d0ff9b2b7baf6.tar.gz
00:01:45.930 [Pipeline] sh
00:01:46.218 + git -C spdk log --oneline -n5
00:01:46.218 7a5718d02 bdev: Use data_block_size for upper layer buffer if hide_metadata is true
00:01:46.218 9b64b1304 bdev: Add APIs get metadata config via desc depending on hide_metadata option
00:01:46.218 95f6a056e bdev: Add spdk_bdev_open_ext_v2() to support per-open options
00:01:46.218 a38267915 bdev: Locate all hot data in spdk_bdev_desc to the first cache line
00:01:46.218 095307e93 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc
00:01:46.230 [Pipeline] }
00:01:46.244 [Pipeline] // stage
00:01:46.252 [Pipeline] stage
00:01:46.254 [Pipeline] { (Prepare)
00:01:46.269 [Pipeline] writeFile
00:01:46.284 [Pipeline] sh
00:01:46.569 + logger -p user.info -t JENKINS-CI
00:01:46.601 [Pipeline] sh
00:01:46.901 + logger -p user.info -t JENKINS-CI
00:01:46.913 [Pipeline] sh
00:01:47.199 + cat autorun-spdk.conf
00:01:47.199 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:47.199 SPDK_TEST_NVMF=1
00:01:47.199 SPDK_TEST_NVME_CLI=1
00:01:47.199 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:47.199 SPDK_TEST_NVMF_NICS=e810
00:01:47.199 SPDK_TEST_VFIOUSER=1
00:01:47.199 SPDK_RUN_UBSAN=1
00:01:47.199 NET_TYPE=phy
00:01:47.208 RUN_NIGHTLY=0
00:01:47.212 [Pipeline] readFile
00:01:47.235 [Pipeline] withEnv
00:01:47.237 [Pipeline] {
00:01:47.248 [Pipeline] sh
00:01:47.537 + set -ex
00:01:47.537 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:47.537 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:47.537 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:47.537 ++ SPDK_TEST_NVMF=1
00:01:47.537 ++ SPDK_TEST_NVME_CLI=1
00:01:47.537 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:47.537 ++ SPDK_TEST_NVMF_NICS=e810
00:01:47.537 ++ SPDK_TEST_VFIOUSER=1
00:01:47.537 ++ SPDK_RUN_UBSAN=1
00:01:47.537 ++ NET_TYPE=phy
00:01:47.537 ++ RUN_NIGHTLY=0
00:01:47.537 + case $SPDK_TEST_NVMF_NICS in
00:01:47.537 + DRIVERS=ice
00:01:47.537 + [[ tcp == \r\d\m\a ]]
00:01:47.537 + [[ -n ice ]]
00:01:47.537 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:47.537 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:47.537 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:47.537 rmmod: ERROR: Module irdma is not currently loaded
00:01:47.537 rmmod: ERROR: Module i40iw is not currently loaded
00:01:47.537 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:47.537 + true
00:01:47.537 + for D in $DRIVERS
00:01:47.537 + sudo modprobe ice
00:01:47.537 + exit 0
00:01:47.547 [Pipeline] }
00:01:47.560 [Pipeline] // withEnv
00:01:47.565 [Pipeline] }
00:01:47.577 [Pipeline] // stage
00:01:47.586 [Pipeline] catchError
00:01:47.588 [Pipeline] {
00:01:47.603 [Pipeline] timeout
00:01:47.603 Timeout set to expire in 1 hr 0 min
00:01:47.605 [Pipeline] {
00:01:47.618 [Pipeline] stage
00:01:47.620 [Pipeline] { (Tests)
00:01:47.634 [Pipeline] sh
00:01:47.922 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:47.922 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:47.922 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:47.922 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:47.922 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:47.922 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:47.922 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:47.922 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:47.922 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:47.922 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:47.922 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:47.922 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:47.922 + source /etc/os-release
00:01:47.922 ++ NAME='Fedora Linux'
00:01:47.922 ++ VERSION='39 (Cloud Edition)'
00:01:47.922 ++ ID=fedora
00:01:47.922 ++ VERSION_ID=39
00:01:47.922 ++ VERSION_CODENAME=
00:01:47.922 ++ PLATFORM_ID=platform:f39
00:01:47.922 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:47.922 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:47.922 ++ LOGO=fedora-logo-icon
00:01:47.922 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:47.922 ++ HOME_URL=https://fedoraproject.org/
00:01:47.922 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:47.922 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:47.922 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:47.922 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:47.922 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:47.922 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:47.922 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:47.922 ++ SUPPORT_END=2024-11-12
00:01:47.922 ++ VARIANT='Cloud Edition'
00:01:47.922 ++ VARIANT_ID=cloud
00:01:47.922 + uname -a
00:01:47.922 Linux spdk-gp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:47.922 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:49.298 Hugepages
00:01:49.298 node hugesize free / total
00:01:49.298 node0 1048576kB 0 / 0
00:01:49.298 node0 2048kB 0 / 0
00:01:49.298 node1 1048576kB 0 / 0
00:01:49.298 node1 2048kB 0 / 0
00:01:49.298 
00:01:49.298 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:49.298 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:49.298 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:49.298 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:49.298 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:49.298 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:49.298 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:49.298 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:49.298 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:49.298 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:49.298 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:49.298 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:49.298 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:49.298 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:49.298 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:49.298 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:49.298 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:49.298 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:49.298 + rm -f /tmp/spdk-ld-path
00:01:49.298 + source autorun-spdk.conf
00:01:49.298 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:49.298 ++ SPDK_TEST_NVMF=1
00:01:49.298 ++ SPDK_TEST_NVME_CLI=1
00:01:49.298 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:49.298 ++ SPDK_TEST_NVMF_NICS=e810
00:01:49.298 ++ SPDK_TEST_VFIOUSER=1
00:01:49.298 ++ SPDK_RUN_UBSAN=1
00:01:49.298 ++ NET_TYPE=phy
00:01:49.298 ++ RUN_NIGHTLY=0
00:01:49.298 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:49.298 + [[ -n '' ]]
00:01:49.298 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:49.298 + for M in /var/spdk/build-*-manifest.txt
00:01:49.298 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:49.298 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:49.298 + for M in /var/spdk/build-*-manifest.txt
00:01:49.298 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:49.298 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:49.298 + for M in /var/spdk/build-*-manifest.txt
00:01:49.298 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:49.298 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:49.298 ++ uname
00:01:49.298 + [[ Linux == \L\i\n\u\x ]]
00:01:49.298 + sudo dmesg -T
00:01:49.298 + sudo dmesg --clear
00:01:49.298 + dmesg_pid=3127571
00:01:49.298 + [[ Fedora Linux == FreeBSD ]]
00:01:49.298 + sudo dmesg -Tw
00:01:49.298 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:49.298 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:49.298 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:49.298 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:49.298 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:49.298 + [[ -x /usr/src/fio-static/fio ]]
00:01:49.298 + export FIO_BIN=/usr/src/fio-static/fio
00:01:49.298 + FIO_BIN=/usr/src/fio-static/fio
00:01:49.298 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:49.298 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:49.298 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:49.298 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:49.298 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:49.299 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:49.299 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:49.299 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:49.299 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:49.299 08:52:26 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:01:49.299 08:52:26 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:49.299 08:52:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:49.299 08:52:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:49.299 08:52:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:49.299 08:52:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:49.299 08:52:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:01:49.299 08:52:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:01:49.299 08:52:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:01:49.299 08:52:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:01:49.299 08:52:26 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:01:49.299 08:52:26 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:49.299 08:52:26 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:49.299 08:52:26 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:01:49.299 08:52:26 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:49.299 08:52:26 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:49.299 08:52:26 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:49.299 08:52:26 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:49.299 08:52:26 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:49.299 08:52:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:49.299 08:52:26 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:49.299 08:52:26 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:49.299 08:52:26 -- paths/export.sh@5 -- $ export PATH
00:01:49.299 08:52:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:49.299 08:52:26 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:49.299 08:52:26 -- common/autobuild_common.sh@486 -- $ date +%s
00:01:49.299 08:52:26 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732089146.XXXXXX
00:01:49.299 08:52:26 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732089146.ahAlvc
00:01:49.299 08:52:26 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:01:49.299 08:52:26 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:01:49.299 08:52:26 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:49.299 08:52:26 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:49.299 08:52:26 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:49.299 08:52:26 -- common/autobuild_common.sh@502 -- $ get_config_params
00:01:49.299 08:52:26 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:01:49.299 08:52:26 -- common/autotest_common.sh@10 -- $ set +x
00:01:49.299 08:52:26 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:49.299 08:52:26 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:01:49.299 08:52:26 -- pm/common@17 -- $ local monitor
00:01:49.299 08:52:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:49.299 08:52:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:49.299 08:52:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:49.299 08:52:26 -- pm/common@21 -- $ date +%s
00:01:49.299 08:52:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:49.299 08:52:26 -- pm/common@21 -- $ date +%s
00:01:49.299 08:52:26 -- pm/common@25 -- $ sleep 1
00:01:49.299 08:52:26 -- pm/common@21 -- $ date +%s
00:01:49.299 08:52:26 -- pm/common@21 -- $ date +%s
00:01:49.299 08:52:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732089146
00:01:49.299 08:52:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732089146
00:01:49.299 08:52:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732089146
00:01:49.299 08:52:26 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732089146
00:01:49.299 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732089146_collect-cpu-load.pm.log
00:01:49.299 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732089146_collect-vmstat.pm.log
00:01:49.299 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732089146_collect-cpu-temp.pm.log
00:01:49.299 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732089146_collect-bmc-pm.bmc.pm.log
00:01:50.238 08:52:27 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:01:50.238 08:52:27 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:50.238 08:52:27 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:50.238 08:52:27 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:50.238 08:52:27 -- spdk/autobuild.sh@16 -- $ date -u
00:01:50.238 Wed Nov 20 07:52:27 AM UTC 2024
00:01:50.238 08:52:27 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:50.496 v25.01-pre-191-g7a5718d02
00:01:50.496 08:52:27 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:50.496 08:52:27 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:50.496 08:52:27 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:50.496 08:52:27 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:01:50.496 08:52:27 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:01:50.496 08:52:27 -- common/autotest_common.sh@10 -- $ set +x
00:01:50.496 ************************************
00:01:50.496 START TEST ubsan
00:01:50.496 ************************************
00:01:50.496 08:52:27 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan'
00:01:50.496 using ubsan
00:01:50.496 
00:01:50.496 real 0m0.000s
00:01:50.496 user 0m0.000s
00:01:50.496 sys 0m0.000s
00:01:50.496 08:52:27 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:01:50.496 08:52:27 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:50.496 ************************************
00:01:50.496 END TEST ubsan
************************************
00:01:50.496 08:52:27 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:50.496 08:52:27 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:50.496 08:52:27 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:50.496 08:52:27 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:50.496 08:52:27 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:50.496 08:52:27 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:50.496 08:52:27 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:50.496 08:52:27 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:50.496 08:52:27 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:50.496 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:50.496 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:50.755 Using 'verbs' RDMA provider
00:02:01.318 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:11.310 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:11.310 Creating mk/config.mk...done.
00:02:11.310 Creating mk/cc.flags.mk...done.
00:02:11.310 Type 'make' to build.
00:02:11.310 08:52:48 -- spdk/autobuild.sh@70 -- $ run_test make make -j48
00:02:11.310 08:52:48 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:02:11.310 08:52:48 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:02:11.310 08:52:48 -- common/autotest_common.sh@10 -- $ set +x
00:02:11.310 ************************************
00:02:11.310 START TEST make
00:02:11.310 ************************************
00:02:11.310 08:52:48 make -- common/autotest_common.sh@1127 -- $ make -j48
00:02:11.569 make[1]: Nothing to be done for 'all'.
00:02:13.498 The Meson build system
00:02:13.498 Version: 1.5.0
00:02:13.498 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:13.498 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:13.498 Build type: native build
00:02:13.498 Project name: libvfio-user
00:02:13.498 Project version: 0.0.1
00:02:13.498 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:13.498 C linker for the host machine: cc ld.bfd 2.40-14
00:02:13.498 Host machine cpu family: x86_64
00:02:13.498 Host machine cpu: x86_64
00:02:13.498 Run-time dependency threads found: YES
00:02:13.498 Library dl found: YES
00:02:13.498 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:13.498 Run-time dependency json-c found: YES 0.17
00:02:13.498 Run-time dependency cmocka found: YES 1.1.7
00:02:13.498 Program pytest-3 found: NO
00:02:13.498 Program flake8 found: NO
00:02:13.498 Program misspell-fixer found: NO
00:02:13.498 Program restructuredtext-lint found: NO
00:02:13.498 Program valgrind found: YES (/usr/bin/valgrind)
00:02:13.498 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:13.498 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:13.498 Compiler for C supports arguments -Wwrite-strings: YES
00:02:13.498 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:13.498 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:13.498 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:13.498 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:13.498 Build targets in project: 8
00:02:13.498 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:13.498 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:13.498 
00:02:13.498 libvfio-user 0.0.1
00:02:13.498 
00:02:13.498 User defined options
00:02:13.498 buildtype : debug
00:02:13.498 default_library: shared
00:02:13.498 libdir : /usr/local/lib
00:02:13.498 
00:02:13.498 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:14.072 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:14.336 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:14.336 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:14.336 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:14.336 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:14.336 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:14.336 [6/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:14.336 [7/37] Compiling C object samples/null.p/null.c.o
00:02:14.336 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:14.336 [9/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:14.596 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:14.596 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:14.597 [12/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:14.597 [13/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:14.597 [14/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:14.597 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:14.597 [16/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:14.597 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:14.597 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:14.597 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:14.597 [20/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:14.597 [21/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:14.597 [22/37] Compiling C object samples/server.p/server.c.o
00:02:14.597 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:14.597 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:14.597 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:14.597 [26/37] Compiling C object samples/client.p/client.c.o
00:02:14.597 [27/37] Linking target samples/client
00:02:14.858 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:14.858 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:14.858 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:02:14.858 [31/37] Linking target test/unit_tests
00:02:15.118 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:15.118 [33/37] Linking target samples/server
00:02:15.118 [34/37] Linking target samples/gpio-pci-idio-16
00:02:15.118 [35/37] Linking target samples/shadow_ioeventfd_server
00:02:15.118 [36/37] Linking target samples/lspci
00:02:15.118 [37/37] Linking target samples/null
00:02:15.118 INFO: autodetecting backend as ninja
00:02:15.118 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:15.118 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:16.066 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:16.066 ninja: no work to do.
00:02:21.333 The Meson build system
00:02:21.333 Version: 1.5.0
00:02:21.333 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:21.333 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:21.333 Build type: native build
00:02:21.333 Program cat found: YES (/usr/bin/cat)
00:02:21.333 Project name: DPDK
00:02:21.333 Project version: 24.03.0
00:02:21.333 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:21.333 C linker for the host machine: cc ld.bfd 2.40-14
00:02:21.333 Host machine cpu family: x86_64
00:02:21.333 Host machine cpu: x86_64
00:02:21.333 Message: ## Building in Developer Mode ##
00:02:21.333 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:21.333 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:21.333 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:21.333 Program python3 found: YES (/usr/bin/python3)
00:02:21.333 Program cat found: YES (/usr/bin/cat)
00:02:21.333 Compiler for C supports arguments -march=native: YES
00:02:21.333 Checking for size of "void *" : 8
00:02:21.333 Checking for size of "void *" : 8 (cached)
00:02:21.333 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:21.333 Library m found: YES
00:02:21.333 Library numa found: YES
00:02:21.333 Has header "numaif.h" : YES
00:02:21.333 Library fdt found: NO
00:02:21.333 Library execinfo found: NO
00:02:21.333 Has header "execinfo.h" : YES
00:02:21.333 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:21.333 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:21.333 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:21.333 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:21.333 Run-time dependency openssl found: YES 3.1.1
00:02:21.333 Run-time dependency libpcap found: YES 1.10.4
00:02:21.333 Has header "pcap.h" with dependency libpcap: YES
00:02:21.333 Compiler for C supports arguments -Wcast-qual: YES
00:02:21.333 Compiler for C supports arguments -Wdeprecated: YES
00:02:21.333 Compiler for C supports arguments -Wformat: YES
00:02:21.333 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:21.333 Compiler for C supports arguments -Wformat-security: NO
00:02:21.333 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:21.333 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:21.333 Compiler for C supports arguments -Wnested-externs: YES
00:02:21.333 Compiler for C supports arguments -Wold-style-definition: YES
00:02:21.333 Compiler for C supports arguments -Wpointer-arith: YES
00:02:21.333 Compiler for C supports arguments -Wsign-compare: YES
00:02:21.333 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:21.333 Compiler for C supports arguments -Wundef: YES
00:02:21.333 Compiler for C supports arguments -Wwrite-strings: YES
00:02:21.333 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:21.333 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:21.333 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:21.333 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:21.333 Program objdump found: YES (/usr/bin/objdump)
00:02:21.333 Compiler for C supports arguments -mavx512f: YES
00:02:21.333 Checking if "AVX512 checking" compiles: YES
00:02:21.333 
Fetching value of define "__SSE4_2__" : 1 00:02:21.333 Fetching value of define "__AES__" : 1 00:02:21.333 Fetching value of define "__AVX__" : 1 00:02:21.333 Fetching value of define "__AVX2__" : (undefined) 00:02:21.334 Fetching value of define "__AVX512BW__" : (undefined) 00:02:21.334 Fetching value of define "__AVX512CD__" : (undefined) 00:02:21.334 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:21.334 Fetching value of define "__AVX512F__" : (undefined) 00:02:21.334 Fetching value of define "__AVX512VL__" : (undefined) 00:02:21.334 Fetching value of define "__PCLMUL__" : 1 00:02:21.334 Fetching value of define "__RDRND__" : 1 00:02:21.334 Fetching value of define "__RDSEED__" : (undefined) 00:02:21.334 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:21.334 Fetching value of define "__znver1__" : (undefined) 00:02:21.334 Fetching value of define "__znver2__" : (undefined) 00:02:21.334 Fetching value of define "__znver3__" : (undefined) 00:02:21.334 Fetching value of define "__znver4__" : (undefined) 00:02:21.334 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:21.334 Message: lib/log: Defining dependency "log" 00:02:21.334 Message: lib/kvargs: Defining dependency "kvargs" 00:02:21.334 Message: lib/telemetry: Defining dependency "telemetry" 00:02:21.334 Checking for function "getentropy" : NO 00:02:21.334 Message: lib/eal: Defining dependency "eal" 00:02:21.334 Message: lib/ring: Defining dependency "ring" 00:02:21.334 Message: lib/rcu: Defining dependency "rcu" 00:02:21.334 Message: lib/mempool: Defining dependency "mempool" 00:02:21.334 Message: lib/mbuf: Defining dependency "mbuf" 00:02:21.334 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:21.334 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:21.334 Compiler for C supports arguments -mpclmul: YES 00:02:21.334 Compiler for C supports arguments -maes: YES 00:02:21.334 Compiler for C supports arguments -mavx512f: YES (cached) 
00:02:21.334 Compiler for C supports arguments -mavx512bw: YES 00:02:21.334 Compiler for C supports arguments -mavx512dq: YES 00:02:21.334 Compiler for C supports arguments -mavx512vl: YES 00:02:21.334 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:21.334 Compiler for C supports arguments -mavx2: YES 00:02:21.334 Compiler for C supports arguments -mavx: YES 00:02:21.334 Message: lib/net: Defining dependency "net" 00:02:21.334 Message: lib/meter: Defining dependency "meter" 00:02:21.334 Message: lib/ethdev: Defining dependency "ethdev" 00:02:21.334 Message: lib/pci: Defining dependency "pci" 00:02:21.334 Message: lib/cmdline: Defining dependency "cmdline" 00:02:21.334 Message: lib/hash: Defining dependency "hash" 00:02:21.334 Message: lib/timer: Defining dependency "timer" 00:02:21.334 Message: lib/compressdev: Defining dependency "compressdev" 00:02:21.334 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:21.334 Message: lib/dmadev: Defining dependency "dmadev" 00:02:21.334 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:21.334 Message: lib/power: Defining dependency "power" 00:02:21.334 Message: lib/reorder: Defining dependency "reorder" 00:02:21.334 Message: lib/security: Defining dependency "security" 00:02:21.334 Has header "linux/userfaultfd.h" : YES 00:02:21.334 Has header "linux/vduse.h" : YES 00:02:21.334 Message: lib/vhost: Defining dependency "vhost" 00:02:21.334 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:21.334 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:21.334 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:21.334 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:21.334 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:21.334 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:21.334 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:21.334 Message: 
Disabling event/* drivers: missing internal dependency "eventdev" 00:02:21.334 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:21.334 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:21.334 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:21.334 Configuring doxy-api-html.conf using configuration 00:02:21.334 Configuring doxy-api-man.conf using configuration 00:02:21.334 Program mandb found: YES (/usr/bin/mandb) 00:02:21.334 Program sphinx-build found: NO 00:02:21.334 Configuring rte_build_config.h using configuration 00:02:21.334 Message: 00:02:21.334 ================= 00:02:21.334 Applications Enabled 00:02:21.334 ================= 00:02:21.334 00:02:21.334 apps: 00:02:21.334 00:02:21.334 00:02:21.334 Message: 00:02:21.334 ================= 00:02:21.334 Libraries Enabled 00:02:21.334 ================= 00:02:21.334 00:02:21.334 libs: 00:02:21.334 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:21.334 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:21.334 cryptodev, dmadev, power, reorder, security, vhost, 00:02:21.334 00:02:21.334 Message: 00:02:21.334 =============== 00:02:21.334 Drivers Enabled 00:02:21.334 =============== 00:02:21.334 00:02:21.334 common: 00:02:21.334 00:02:21.334 bus: 00:02:21.334 pci, vdev, 00:02:21.334 mempool: 00:02:21.334 ring, 00:02:21.334 dma: 00:02:21.334 00:02:21.334 net: 00:02:21.334 00:02:21.334 crypto: 00:02:21.334 00:02:21.334 compress: 00:02:21.334 00:02:21.334 vdpa: 00:02:21.334 00:02:21.334 00:02:21.334 Message: 00:02:21.334 ================= 00:02:21.334 Content Skipped 00:02:21.334 ================= 00:02:21.334 00:02:21.334 apps: 00:02:21.334 dumpcap: explicitly disabled via build config 00:02:21.334 graph: explicitly disabled via build config 00:02:21.334 pdump: explicitly disabled via build config 00:02:21.334 proc-info: explicitly disabled via build config 00:02:21.334 test-acl: explicitly disabled via build config 
00:02:21.334 test-bbdev: explicitly disabled via build config 00:02:21.334 test-cmdline: explicitly disabled via build config 00:02:21.334 test-compress-perf: explicitly disabled via build config 00:02:21.334 test-crypto-perf: explicitly disabled via build config 00:02:21.334 test-dma-perf: explicitly disabled via build config 00:02:21.334 test-eventdev: explicitly disabled via build config 00:02:21.334 test-fib: explicitly disabled via build config 00:02:21.334 test-flow-perf: explicitly disabled via build config 00:02:21.334 test-gpudev: explicitly disabled via build config 00:02:21.334 test-mldev: explicitly disabled via build config 00:02:21.334 test-pipeline: explicitly disabled via build config 00:02:21.334 test-pmd: explicitly disabled via build config 00:02:21.334 test-regex: explicitly disabled via build config 00:02:21.334 test-sad: explicitly disabled via build config 00:02:21.334 test-security-perf: explicitly disabled via build config 00:02:21.334 00:02:21.334 libs: 00:02:21.334 argparse: explicitly disabled via build config 00:02:21.334 metrics: explicitly disabled via build config 00:02:21.334 acl: explicitly disabled via build config 00:02:21.334 bbdev: explicitly disabled via build config 00:02:21.334 bitratestats: explicitly disabled via build config 00:02:21.334 bpf: explicitly disabled via build config 00:02:21.334 cfgfile: explicitly disabled via build config 00:02:21.334 distributor: explicitly disabled via build config 00:02:21.334 efd: explicitly disabled via build config 00:02:21.334 eventdev: explicitly disabled via build config 00:02:21.334 dispatcher: explicitly disabled via build config 00:02:21.334 gpudev: explicitly disabled via build config 00:02:21.334 gro: explicitly disabled via build config 00:02:21.334 gso: explicitly disabled via build config 00:02:21.334 ip_frag: explicitly disabled via build config 00:02:21.334 jobstats: explicitly disabled via build config 00:02:21.334 latencystats: explicitly disabled via build config 
00:02:21.334 lpm: explicitly disabled via build config 00:02:21.334 member: explicitly disabled via build config 00:02:21.334 pcapng: explicitly disabled via build config 00:02:21.334 rawdev: explicitly disabled via build config 00:02:21.334 regexdev: explicitly disabled via build config 00:02:21.334 mldev: explicitly disabled via build config 00:02:21.334 rib: explicitly disabled via build config 00:02:21.334 sched: explicitly disabled via build config 00:02:21.334 stack: explicitly disabled via build config 00:02:21.334 ipsec: explicitly disabled via build config 00:02:21.334 pdcp: explicitly disabled via build config 00:02:21.334 fib: explicitly disabled via build config 00:02:21.334 port: explicitly disabled via build config 00:02:21.334 pdump: explicitly disabled via build config 00:02:21.334 table: explicitly disabled via build config 00:02:21.334 pipeline: explicitly disabled via build config 00:02:21.334 graph: explicitly disabled via build config 00:02:21.334 node: explicitly disabled via build config 00:02:21.334 00:02:21.334 drivers: 00:02:21.334 common/cpt: not in enabled drivers build config 00:02:21.334 common/dpaax: not in enabled drivers build config 00:02:21.334 common/iavf: not in enabled drivers build config 00:02:21.334 common/idpf: not in enabled drivers build config 00:02:21.334 common/ionic: not in enabled drivers build config 00:02:21.334 common/mvep: not in enabled drivers build config 00:02:21.334 common/octeontx: not in enabled drivers build config 00:02:21.334 bus/auxiliary: not in enabled drivers build config 00:02:21.334 bus/cdx: not in enabled drivers build config 00:02:21.334 bus/dpaa: not in enabled drivers build config 00:02:21.334 bus/fslmc: not in enabled drivers build config 00:02:21.334 bus/ifpga: not in enabled drivers build config 00:02:21.334 bus/platform: not in enabled drivers build config 00:02:21.334 bus/uacce: not in enabled drivers build config 00:02:21.334 bus/vmbus: not in enabled drivers build config 00:02:21.334 
common/cnxk: not in enabled drivers build config 00:02:21.334 common/mlx5: not in enabled drivers build config 00:02:21.334 common/nfp: not in enabled drivers build config 00:02:21.334 common/nitrox: not in enabled drivers build config 00:02:21.334 common/qat: not in enabled drivers build config 00:02:21.334 common/sfc_efx: not in enabled drivers build config 00:02:21.334 mempool/bucket: not in enabled drivers build config 00:02:21.334 mempool/cnxk: not in enabled drivers build config 00:02:21.334 mempool/dpaa: not in enabled drivers build config 00:02:21.334 mempool/dpaa2: not in enabled drivers build config 00:02:21.334 mempool/octeontx: not in enabled drivers build config 00:02:21.334 mempool/stack: not in enabled drivers build config 00:02:21.334 dma/cnxk: not in enabled drivers build config 00:02:21.334 dma/dpaa: not in enabled drivers build config 00:02:21.335 dma/dpaa2: not in enabled drivers build config 00:02:21.335 dma/hisilicon: not in enabled drivers build config 00:02:21.335 dma/idxd: not in enabled drivers build config 00:02:21.335 dma/ioat: not in enabled drivers build config 00:02:21.335 dma/skeleton: not in enabled drivers build config 00:02:21.335 net/af_packet: not in enabled drivers build config 00:02:21.335 net/af_xdp: not in enabled drivers build config 00:02:21.335 net/ark: not in enabled drivers build config 00:02:21.335 net/atlantic: not in enabled drivers build config 00:02:21.335 net/avp: not in enabled drivers build config 00:02:21.335 net/axgbe: not in enabled drivers build config 00:02:21.335 net/bnx2x: not in enabled drivers build config 00:02:21.335 net/bnxt: not in enabled drivers build config 00:02:21.335 net/bonding: not in enabled drivers build config 00:02:21.335 net/cnxk: not in enabled drivers build config 00:02:21.335 net/cpfl: not in enabled drivers build config 00:02:21.335 net/cxgbe: not in enabled drivers build config 00:02:21.335 net/dpaa: not in enabled drivers build config 00:02:21.335 net/dpaa2: not in enabled drivers 
build config 00:02:21.335 net/e1000: not in enabled drivers build config 00:02:21.335 net/ena: not in enabled drivers build config 00:02:21.335 net/enetc: not in enabled drivers build config 00:02:21.335 net/enetfec: not in enabled drivers build config 00:02:21.335 net/enic: not in enabled drivers build config 00:02:21.335 net/failsafe: not in enabled drivers build config 00:02:21.335 net/fm10k: not in enabled drivers build config 00:02:21.335 net/gve: not in enabled drivers build config 00:02:21.335 net/hinic: not in enabled drivers build config 00:02:21.335 net/hns3: not in enabled drivers build config 00:02:21.335 net/i40e: not in enabled drivers build config 00:02:21.335 net/iavf: not in enabled drivers build config 00:02:21.335 net/ice: not in enabled drivers build config 00:02:21.335 net/idpf: not in enabled drivers build config 00:02:21.335 net/igc: not in enabled drivers build config 00:02:21.335 net/ionic: not in enabled drivers build config 00:02:21.335 net/ipn3ke: not in enabled drivers build config 00:02:21.335 net/ixgbe: not in enabled drivers build config 00:02:21.335 net/mana: not in enabled drivers build config 00:02:21.335 net/memif: not in enabled drivers build config 00:02:21.335 net/mlx4: not in enabled drivers build config 00:02:21.335 net/mlx5: not in enabled drivers build config 00:02:21.335 net/mvneta: not in enabled drivers build config 00:02:21.335 net/mvpp2: not in enabled drivers build config 00:02:21.335 net/netvsc: not in enabled drivers build config 00:02:21.335 net/nfb: not in enabled drivers build config 00:02:21.335 net/nfp: not in enabled drivers build config 00:02:21.335 net/ngbe: not in enabled drivers build config 00:02:21.335 net/null: not in enabled drivers build config 00:02:21.335 net/octeontx: not in enabled drivers build config 00:02:21.335 net/octeon_ep: not in enabled drivers build config 00:02:21.335 net/pcap: not in enabled drivers build config 00:02:21.335 net/pfe: not in enabled drivers build config 00:02:21.335 
net/qede: not in enabled drivers build config 00:02:21.335 net/ring: not in enabled drivers build config 00:02:21.335 net/sfc: not in enabled drivers build config 00:02:21.335 net/softnic: not in enabled drivers build config 00:02:21.335 net/tap: not in enabled drivers build config 00:02:21.335 net/thunderx: not in enabled drivers build config 00:02:21.335 net/txgbe: not in enabled drivers build config 00:02:21.335 net/vdev_netvsc: not in enabled drivers build config 00:02:21.335 net/vhost: not in enabled drivers build config 00:02:21.335 net/virtio: not in enabled drivers build config 00:02:21.335 net/vmxnet3: not in enabled drivers build config 00:02:21.335 raw/*: missing internal dependency, "rawdev" 00:02:21.335 crypto/armv8: not in enabled drivers build config 00:02:21.335 crypto/bcmfs: not in enabled drivers build config 00:02:21.335 crypto/caam_jr: not in enabled drivers build config 00:02:21.335 crypto/ccp: not in enabled drivers build config 00:02:21.335 crypto/cnxk: not in enabled drivers build config 00:02:21.335 crypto/dpaa_sec: not in enabled drivers build config 00:02:21.335 crypto/dpaa2_sec: not in enabled drivers build config 00:02:21.335 crypto/ipsec_mb: not in enabled drivers build config 00:02:21.335 crypto/mlx5: not in enabled drivers build config 00:02:21.335 crypto/mvsam: not in enabled drivers build config 00:02:21.335 crypto/nitrox: not in enabled drivers build config 00:02:21.335 crypto/null: not in enabled drivers build config 00:02:21.335 crypto/octeontx: not in enabled drivers build config 00:02:21.335 crypto/openssl: not in enabled drivers build config 00:02:21.335 crypto/scheduler: not in enabled drivers build config 00:02:21.335 crypto/uadk: not in enabled drivers build config 00:02:21.335 crypto/virtio: not in enabled drivers build config 00:02:21.335 compress/isal: not in enabled drivers build config 00:02:21.335 compress/mlx5: not in enabled drivers build config 00:02:21.335 compress/nitrox: not in enabled drivers build config 
00:02:21.335 compress/octeontx: not in enabled drivers build config 00:02:21.335 compress/zlib: not in enabled drivers build config 00:02:21.335 regex/*: missing internal dependency, "regexdev" 00:02:21.335 ml/*: missing internal dependency, "mldev" 00:02:21.335 vdpa/ifc: not in enabled drivers build config 00:02:21.335 vdpa/mlx5: not in enabled drivers build config 00:02:21.335 vdpa/nfp: not in enabled drivers build config 00:02:21.335 vdpa/sfc: not in enabled drivers build config 00:02:21.335 event/*: missing internal dependency, "eventdev" 00:02:21.335 baseband/*: missing internal dependency, "bbdev" 00:02:21.335 gpu/*: missing internal dependency, "gpudev" 00:02:21.335 00:02:21.335 00:02:21.335 Build targets in project: 85 00:02:21.335 00:02:21.335 DPDK 24.03.0 00:02:21.335 00:02:21.335 User defined options 00:02:21.335 buildtype : debug 00:02:21.335 default_library : shared 00:02:21.335 libdir : lib 00:02:21.335 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:21.335 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:21.335 c_link_args : 00:02:21.335 cpu_instruction_set: native 00:02:21.335 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:02:21.335 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:02:21.335 enable_docs : false 00:02:21.335 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:21.335 enable_kmods : false 00:02:21.335 max_lcores : 128 00:02:21.335 tests : false 00:02:21.335 00:02:21.335 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 
00:02:21.604 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:21.604 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:21.604 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:21.604 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:21.604 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:21.604 [5/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:21.604 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:21.604 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:21.604 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:21.604 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:21.604 [10/268] Linking static target lib/librte_kvargs.a 00:02:21.604 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:21.604 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:21.604 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:21.604 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:21.604 [15/268] Linking static target lib/librte_log.a 00:02:21.865 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:22.439 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.439 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:22.439 [19/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:22.439 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:22.439 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:22.439 [22/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:22.439 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:22.439 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:22.439 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:22.439 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:22.439 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:22.439 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:22.439 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:22.439 [30/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:22.439 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:22.439 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:22.439 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:22.439 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:22.439 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:22.439 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:22.439 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:22.697 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:22.697 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:22.697 [40/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:22.697 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:22.697 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:22.697 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:22.697 [44/268] 
Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:22.697 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:22.697 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:22.697 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:22.697 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:22.697 [49/268] Linking static target lib/librte_telemetry.a 00:02:22.697 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:22.697 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:22.697 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:22.697 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:22.697 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:22.697 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:22.698 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:22.698 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:22.698 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:22.698 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:22.698 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:22.698 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:22.698 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:22.959 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:22.959 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:22.959 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:22.959 [66/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:22.959 [67/268] Linking target lib/librte_log.so.24.1 00:02:22.959 [68/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:22.959 [69/268] Linking static target lib/librte_pci.a 00:02:23.221 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:23.221 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:23.488 [72/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:23.488 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:23.488 [74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:23.488 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:23.488 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:23.488 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:23.488 [78/268] Linking target lib/librte_kvargs.so.24.1 00:02:23.488 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:23.488 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:23.488 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:23.488 [82/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:23.488 [83/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:23.488 [84/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:23.488 [85/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:23.488 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:23.488 [87/268] Linking static target lib/librte_meter.a 00:02:23.488 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:23.488 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:23.488 [90/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:23.488 [91/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:23.488 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:23.488 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:23.747 [94/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:23.747 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:23.747 [96/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:23.747 [97/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:23.747 [98/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:23.747 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:23.747 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:23.747 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:23.747 [102/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.747 [103/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:23.747 [104/268] Linking static target lib/librte_ring.a 00:02:23.747 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:23.748 [106/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.748 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:23.748 [108/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:23.748 [109/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:23.748 [110/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:23.748 [111/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:23.748 [112/268] Linking static target lib/librte_eal.a 
00:02:23.748 [113/268] Linking target lib/librte_telemetry.so.24.1 00:02:23.748 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:23.748 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:23.748 [116/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:23.748 [117/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:23.748 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:23.748 [119/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:23.748 [120/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:24.011 [121/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:24.011 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:24.011 [123/268] Linking static target lib/librte_rcu.a 00:02:24.011 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:24.011 [125/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:24.011 [126/268] Linking static target lib/librte_mempool.a 00:02:24.011 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:24.011 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:24.012 [129/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:24.012 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:24.012 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:24.012 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:24.012 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:24.012 [134/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.012 [135/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 
00:02:24.276 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:24.276 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:24.276 [138/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:24.276 [139/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:24.276 [140/268] Linking static target lib/librte_net.a 00:02:24.276 [141/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.276 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:24.546 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:24.546 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:24.546 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:24.546 [146/268] Linking static target lib/librte_cmdline.a 00:02:24.546 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:24.546 [148/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:24.546 [149/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.546 [150/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:24.546 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:24.546 [152/268] Linking static target lib/librte_timer.a 00:02:24.806 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:24.806 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:24.806 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:24.806 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:24.806 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:24.806 [158/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:24.806 [159/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.806 [160/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:24.806 [161/268] Linking static target lib/librte_dmadev.a 00:02:24.806 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:25.065 [163/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:25.065 [164/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:25.065 [165/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:25.065 [166/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:25.065 [167/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:25.065 [168/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:25.065 [169/268] Linking static target lib/librte_power.a 00:02:25.065 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:25.065 [171/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.065 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:25.065 [173/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:25.065 [174/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:25.065 [175/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.065 [176/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:25.065 [177/268] Linking static target lib/librte_compressdev.a 00:02:25.065 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:25.065 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:25.065 [180/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:25.065 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:25.065 [182/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:25.323 [183/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:25.323 [184/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:25.323 [185/268] Linking static target lib/librte_hash.a 00:02:25.323 [186/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:25.323 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:25.323 [188/268] Linking static target lib/librte_reorder.a 00:02:25.323 [189/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:25.323 [190/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.323 [191/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:25.323 [192/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:25.323 [193/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:25.323 [194/268] Linking static target lib/librte_mbuf.a 00:02:25.323 [195/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:25.323 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:25.323 [197/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:25.323 [198/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:25.323 [199/268] Linking static target drivers/librte_bus_vdev.a 00:02:25.581 [200/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.581 [201/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:25.581 [202/268] Linking static target lib/librte_security.a 00:02:25.581 [203/268] 
Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:25.581 [204/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:25.581 [205/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:25.581 [206/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.581 [207/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.581 [208/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:25.581 [209/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:25.581 [210/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:25.581 [211/268] Linking static target drivers/librte_bus_pci.a 00:02:25.581 [212/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.581 [213/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.581 [214/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:25.839 [215/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:25.839 [216/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:25.839 [217/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:25.839 [218/268] Linking static target drivers/librte_mempool_ring.a 00:02:25.839 [219/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.839 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.839 [221/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.839 [222/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:25.839 [223/268] Linking static target lib/librte_cryptodev.a 00:02:26.097 [224/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.097 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:26.097 [226/268] Linking static target lib/librte_ethdev.a 00:02:27.030 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.399 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:30.362 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.362 [230/268] Linking target lib/librte_eal.so.24.1 00:02:30.362 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:30.362 [232/268] Linking target lib/librte_ring.so.24.1 00:02:30.362 [233/268] Linking target lib/librte_meter.so.24.1 00:02:30.362 [234/268] Linking target lib/librte_pci.so.24.1 00:02:30.362 [235/268] Linking target lib/librte_timer.so.24.1 00:02:30.362 [236/268] Linking target lib/librte_dmadev.so.24.1 00:02:30.362 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:30.362 [238/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.362 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:30.362 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:30.362 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:30.362 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:30.362 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:30.362 [244/268] Linking target lib/librte_rcu.so.24.1 00:02:30.362 [245/268] Linking target lib/librte_mempool.so.24.1 00:02:30.362 
[246/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:30.626 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:30.626 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:30.626 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:30.626 [250/268] Linking target lib/librte_mbuf.so.24.1 00:02:30.885 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:30.885 [252/268] Linking target lib/librte_reorder.so.24.1 00:02:30.885 [253/268] Linking target lib/librte_compressdev.so.24.1 00:02:30.885 [254/268] Linking target lib/librte_net.so.24.1 00:02:30.885 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:02:30.885 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:30.885 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:30.885 [258/268] Linking target lib/librte_security.so.24.1 00:02:30.885 [259/268] Linking target lib/librte_hash.so.24.1 00:02:30.885 [260/268] Linking target lib/librte_cmdline.so.24.1 00:02:31.143 [261/268] Linking target lib/librte_ethdev.so.24.1 00:02:31.143 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:31.143 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:31.143 [264/268] Linking target lib/librte_power.so.24.1 00:02:34.435 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:34.435 [266/268] Linking static target lib/librte_vhost.a 00:02:35.369 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.369 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:35.369 INFO: autodetecting backend as ninja 00:02:35.369 INFO: calculating backend command to run: /usr/local/bin/ninja -C 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:57.286 CC lib/ut/ut.o 00:02:57.286 CC lib/ut_mock/mock.o 00:02:57.286 CC lib/log/log.o 00:02:57.286 CC lib/log/log_flags.o 00:02:57.286 CC lib/log/log_deprecated.o 00:02:57.286 LIB libspdk_ut.a 00:02:57.286 LIB libspdk_ut_mock.a 00:02:57.286 LIB libspdk_log.a 00:02:57.286 SO libspdk_ut.so.2.0 00:02:57.286 SO libspdk_ut_mock.so.6.0 00:02:57.286 SO libspdk_log.so.7.1 00:02:57.286 SYMLINK libspdk_ut.so 00:02:57.286 SYMLINK libspdk_ut_mock.so 00:02:57.286 SYMLINK libspdk_log.so 00:02:57.286 CXX lib/trace_parser/trace.o 00:02:57.286 CC lib/dma/dma.o 00:02:57.286 CC lib/ioat/ioat.o 00:02:57.286 CC lib/util/base64.o 00:02:57.286 CC lib/util/bit_array.o 00:02:57.286 CC lib/util/cpuset.o 00:02:57.286 CC lib/util/crc16.o 00:02:57.286 CC lib/util/crc32.o 00:02:57.286 CC lib/util/crc32c.o 00:02:57.286 CC lib/util/crc32_ieee.o 00:02:57.286 CC lib/util/crc64.o 00:02:57.286 CC lib/util/dif.o 00:02:57.286 CC lib/util/fd.o 00:02:57.286 CC lib/util/fd_group.o 00:02:57.286 CC lib/util/file.o 00:02:57.286 CC lib/util/hexlify.o 00:02:57.286 CC lib/util/iov.o 00:02:57.286 CC lib/util/math.o 00:02:57.286 CC lib/util/net.o 00:02:57.286 CC lib/util/pipe.o 00:02:57.286 CC lib/util/strerror_tls.o 00:02:57.286 CC lib/util/string.o 00:02:57.286 CC lib/util/uuid.o 00:02:57.286 CC lib/util/xor.o 00:02:57.286 CC lib/util/md5.o 00:02:57.286 CC lib/util/zipf.o 00:02:57.286 CC lib/vfio_user/host/vfio_user_pci.o 00:02:57.286 CC lib/vfio_user/host/vfio_user.o 00:02:57.286 LIB libspdk_dma.a 00:02:57.286 SO libspdk_dma.so.5.0 00:02:57.286 SYMLINK libspdk_dma.so 00:02:57.286 LIB libspdk_ioat.a 00:02:57.286 SO libspdk_ioat.so.7.0 00:02:57.286 SYMLINK libspdk_ioat.so 00:02:57.286 LIB libspdk_vfio_user.a 00:02:57.286 SO libspdk_vfio_user.so.5.0 00:02:57.286 SYMLINK libspdk_vfio_user.so 00:02:57.286 LIB libspdk_util.a 00:02:57.286 SO libspdk_util.so.10.1 00:02:57.286 SYMLINK libspdk_util.so 00:02:57.286 CC lib/json/json_parse.o 
00:02:57.286 CC lib/vmd/vmd.o 00:02:57.286 CC lib/vmd/led.o 00:02:57.286 CC lib/idxd/idxd.o 00:02:57.286 CC lib/json/json_util.o 00:02:57.286 CC lib/env_dpdk/env.o 00:02:57.286 CC lib/json/json_write.o 00:02:57.286 CC lib/idxd/idxd_user.o 00:02:57.286 CC lib/env_dpdk/memory.o 00:02:57.286 CC lib/idxd/idxd_kernel.o 00:02:57.286 CC lib/env_dpdk/pci.o 00:02:57.286 CC lib/conf/conf.o 00:02:57.286 CC lib/rdma_utils/rdma_utils.o 00:02:57.286 CC lib/env_dpdk/init.o 00:02:57.286 CC lib/env_dpdk/threads.o 00:02:57.286 CC lib/env_dpdk/pci_ioat.o 00:02:57.286 CC lib/env_dpdk/pci_virtio.o 00:02:57.286 CC lib/env_dpdk/pci_vmd.o 00:02:57.286 CC lib/env_dpdk/pci_idxd.o 00:02:57.286 CC lib/env_dpdk/pci_event.o 00:02:57.286 CC lib/env_dpdk/sigbus_handler.o 00:02:57.286 CC lib/env_dpdk/pci_dpdk.o 00:02:57.286 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:57.286 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:57.286 LIB libspdk_trace_parser.a 00:02:57.286 SO libspdk_trace_parser.so.6.0 00:02:57.286 SYMLINK libspdk_trace_parser.so 00:02:57.286 LIB libspdk_rdma_utils.a 00:02:57.286 LIB libspdk_json.a 00:02:57.286 LIB libspdk_conf.a 00:02:57.286 SO libspdk_rdma_utils.so.1.0 00:02:57.286 SO libspdk_json.so.6.0 00:02:57.286 SO libspdk_conf.so.6.0 00:02:57.286 SYMLINK libspdk_rdma_utils.so 00:02:57.286 SYMLINK libspdk_json.so 00:02:57.286 SYMLINK libspdk_conf.so 00:02:57.544 CC lib/jsonrpc/jsonrpc_server.o 00:02:57.544 CC lib/rdma_provider/common.o 00:02:57.544 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:57.544 CC lib/jsonrpc/jsonrpc_client.o 00:02:57.544 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:57.544 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:57.544 LIB libspdk_idxd.a 00:02:57.544 SO libspdk_idxd.so.12.1 00:02:57.544 LIB libspdk_vmd.a 00:02:57.544 SYMLINK libspdk_idxd.so 00:02:57.544 SO libspdk_vmd.so.6.0 00:02:57.801 SYMLINK libspdk_vmd.so 00:02:57.801 LIB libspdk_rdma_provider.a 00:02:57.801 SO libspdk_rdma_provider.so.7.0 00:02:57.801 LIB libspdk_jsonrpc.a 00:02:57.801 SO libspdk_jsonrpc.so.6.0 
00:02:57.801 SYMLINK libspdk_rdma_provider.so 00:02:57.801 SYMLINK libspdk_jsonrpc.so 00:02:58.060 CC lib/rpc/rpc.o 00:02:58.317 LIB libspdk_rpc.a 00:02:58.317 SO libspdk_rpc.so.6.0 00:02:58.317 SYMLINK libspdk_rpc.so 00:02:58.317 CC lib/trace/trace.o 00:02:58.317 CC lib/trace/trace_flags.o 00:02:58.317 CC lib/notify/notify.o 00:02:58.317 CC lib/trace/trace_rpc.o 00:02:58.317 CC lib/keyring/keyring.o 00:02:58.317 CC lib/notify/notify_rpc.o 00:02:58.317 CC lib/keyring/keyring_rpc.o 00:02:58.575 LIB libspdk_notify.a 00:02:58.575 SO libspdk_notify.so.6.0 00:02:58.575 SYMLINK libspdk_notify.so 00:02:58.575 LIB libspdk_keyring.a 00:02:58.833 SO libspdk_keyring.so.2.0 00:02:58.833 LIB libspdk_trace.a 00:02:58.833 SO libspdk_trace.so.11.0 00:02:58.833 SYMLINK libspdk_keyring.so 00:02:58.833 SYMLINK libspdk_trace.so 00:02:58.833 CC lib/sock/sock.o 00:02:58.833 CC lib/sock/sock_rpc.o 00:02:58.833 CC lib/thread/thread.o 00:02:58.833 CC lib/thread/iobuf.o 00:02:59.090 LIB libspdk_env_dpdk.a 00:02:59.090 SO libspdk_env_dpdk.so.15.1 00:02:59.090 SYMLINK libspdk_env_dpdk.so 00:02:59.349 LIB libspdk_sock.a 00:02:59.349 SO libspdk_sock.so.10.0 00:02:59.349 SYMLINK libspdk_sock.so 00:02:59.607 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:59.607 CC lib/nvme/nvme_ctrlr.o 00:02:59.607 CC lib/nvme/nvme_fabric.o 00:02:59.607 CC lib/nvme/nvme_ns_cmd.o 00:02:59.607 CC lib/nvme/nvme_ns.o 00:02:59.607 CC lib/nvme/nvme_pcie_common.o 00:02:59.607 CC lib/nvme/nvme_pcie.o 00:02:59.607 CC lib/nvme/nvme_qpair.o 00:02:59.607 CC lib/nvme/nvme.o 00:02:59.607 CC lib/nvme/nvme_quirks.o 00:02:59.607 CC lib/nvme/nvme_transport.o 00:02:59.607 CC lib/nvme/nvme_discovery.o 00:02:59.607 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:59.607 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:59.607 CC lib/nvme/nvme_tcp.o 00:02:59.607 CC lib/nvme/nvme_opal.o 00:02:59.607 CC lib/nvme/nvme_io_msg.o 00:02:59.607 CC lib/nvme/nvme_poll_group.o 00:02:59.607 CC lib/nvme/nvme_zns.o 00:02:59.607 CC lib/nvme/nvme_stubs.o 00:02:59.607 CC 
lib/nvme/nvme_auth.o 00:02:59.607 CC lib/nvme/nvme_cuse.o 00:02:59.607 CC lib/nvme/nvme_vfio_user.o 00:02:59.607 CC lib/nvme/nvme_rdma.o 00:03:00.541 LIB libspdk_thread.a 00:03:00.541 SO libspdk_thread.so.11.0 00:03:00.541 SYMLINK libspdk_thread.so 00:03:00.800 CC lib/accel/accel.o 00:03:00.800 CC lib/blob/blobstore.o 00:03:00.800 CC lib/virtio/virtio.o 00:03:00.800 CC lib/fsdev/fsdev.o 00:03:00.800 CC lib/init/json_config.o 00:03:00.800 CC lib/accel/accel_rpc.o 00:03:00.800 CC lib/blob/request.o 00:03:00.800 CC lib/vfu_tgt/tgt_endpoint.o 00:03:00.800 CC lib/virtio/virtio_vhost_user.o 00:03:00.800 CC lib/fsdev/fsdev_io.o 00:03:00.800 CC lib/init/subsystem.o 00:03:00.800 CC lib/accel/accel_sw.o 00:03:00.800 CC lib/blob/zeroes.o 00:03:00.800 CC lib/vfu_tgt/tgt_rpc.o 00:03:00.800 CC lib/virtio/virtio_vfio_user.o 00:03:00.800 CC lib/blob/blob_bs_dev.o 00:03:00.800 CC lib/virtio/virtio_pci.o 00:03:00.800 CC lib/fsdev/fsdev_rpc.o 00:03:00.800 CC lib/init/subsystem_rpc.o 00:03:00.800 CC lib/init/rpc.o 00:03:01.057 LIB libspdk_init.a 00:03:01.057 SO libspdk_init.so.6.0 00:03:01.057 SYMLINK libspdk_init.so 00:03:01.057 LIB libspdk_vfu_tgt.a 00:03:01.315 SO libspdk_vfu_tgt.so.3.0 00:03:01.315 SYMLINK libspdk_vfu_tgt.so 00:03:01.315 LIB libspdk_virtio.a 00:03:01.315 SO libspdk_virtio.so.7.0 00:03:01.315 CC lib/event/app.o 00:03:01.315 CC lib/event/reactor.o 00:03:01.315 CC lib/event/log_rpc.o 00:03:01.315 CC lib/event/app_rpc.o 00:03:01.315 CC lib/event/scheduler_static.o 00:03:01.315 SYMLINK libspdk_virtio.so 00:03:01.573 LIB libspdk_fsdev.a 00:03:01.573 SO libspdk_fsdev.so.2.0 00:03:01.573 SYMLINK libspdk_fsdev.so 00:03:01.830 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:01.830 LIB libspdk_event.a 00:03:01.830 SO libspdk_event.so.14.0 00:03:01.830 SYMLINK libspdk_event.so 00:03:02.088 LIB libspdk_nvme.a 00:03:02.088 LIB libspdk_accel.a 00:03:02.088 SO libspdk_accel.so.16.0 00:03:02.088 SYMLINK libspdk_accel.so 00:03:02.088 SO libspdk_nvme.so.15.0 00:03:02.347 CC 
lib/bdev/bdev.o 00:03:02.347 CC lib/bdev/bdev_rpc.o 00:03:02.347 CC lib/bdev/bdev_zone.o 00:03:02.347 CC lib/bdev/part.o 00:03:02.347 CC lib/bdev/scsi_nvme.o 00:03:02.347 SYMLINK libspdk_nvme.so 00:03:02.347 LIB libspdk_fuse_dispatcher.a 00:03:02.347 SO libspdk_fuse_dispatcher.so.1.0 00:03:02.606 SYMLINK libspdk_fuse_dispatcher.so 00:03:03.988 LIB libspdk_blob.a 00:03:03.988 SO libspdk_blob.so.11.0 00:03:03.988 SYMLINK libspdk_blob.so 00:03:04.246 CC lib/lvol/lvol.o 00:03:04.246 CC lib/blobfs/blobfs.o 00:03:04.246 CC lib/blobfs/tree.o 00:03:05.185 LIB libspdk_bdev.a 00:03:05.185 LIB libspdk_blobfs.a 00:03:05.185 SO libspdk_bdev.so.17.0 00:03:05.185 SO libspdk_blobfs.so.10.0 00:03:05.185 SYMLINK libspdk_blobfs.so 00:03:05.185 SYMLINK libspdk_bdev.so 00:03:05.185 LIB libspdk_lvol.a 00:03:05.185 SO libspdk_lvol.so.10.0 00:03:05.185 SYMLINK libspdk_lvol.so 00:03:05.185 CC lib/nbd/nbd.o 00:03:05.185 CC lib/scsi/dev.o 00:03:05.185 CC lib/nbd/nbd_rpc.o 00:03:05.185 CC lib/scsi/lun.o 00:03:05.185 CC lib/ublk/ublk.o 00:03:05.185 CC lib/scsi/port.o 00:03:05.185 CC lib/nvmf/ctrlr.o 00:03:05.185 CC lib/ublk/ublk_rpc.o 00:03:05.185 CC lib/scsi/scsi.o 00:03:05.185 CC lib/nvmf/ctrlr_discovery.o 00:03:05.185 CC lib/scsi/scsi_bdev.o 00:03:05.185 CC lib/ftl/ftl_core.o 00:03:05.185 CC lib/nvmf/ctrlr_bdev.o 00:03:05.185 CC lib/scsi/scsi_pr.o 00:03:05.185 CC lib/nvmf/subsystem.o 00:03:05.185 CC lib/ftl/ftl_init.o 00:03:05.185 CC lib/nvmf/nvmf.o 00:03:05.185 CC lib/scsi/scsi_rpc.o 00:03:05.185 CC lib/ftl/ftl_layout.o 00:03:05.185 CC lib/scsi/task.o 00:03:05.185 CC lib/nvmf/nvmf_rpc.o 00:03:05.185 CC lib/nvmf/transport.o 00:03:05.185 CC lib/ftl/ftl_debug.o 00:03:05.185 CC lib/ftl/ftl_io.o 00:03:05.185 CC lib/ftl/ftl_sb.o 00:03:05.185 CC lib/nvmf/tcp.o 00:03:05.185 CC lib/ftl/ftl_l2p.o 00:03:05.185 CC lib/nvmf/stubs.o 00:03:05.185 CC lib/ftl/ftl_l2p_flat.o 00:03:05.185 CC lib/nvmf/mdns_server.o 00:03:05.185 CC lib/ftl/ftl_nv_cache.o 00:03:05.185 CC lib/nvmf/vfio_user.o 00:03:05.185 CC 
lib/nvmf/rdma.o 00:03:05.185 CC lib/ftl/ftl_band.o 00:03:05.185 CC lib/nvmf/auth.o 00:03:05.185 CC lib/ftl/ftl_band_ops.o 00:03:05.185 CC lib/ftl/ftl_rq.o 00:03:05.185 CC lib/ftl/ftl_writer.o 00:03:05.185 CC lib/ftl/ftl_reloc.o 00:03:05.185 CC lib/ftl/ftl_l2p_cache.o 00:03:05.185 CC lib/ftl/ftl_p2l.o 00:03:05.185 CC lib/ftl/ftl_p2l_log.o 00:03:05.185 CC lib/ftl/mngt/ftl_mngt.o 00:03:05.185 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:05.185 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:05.185 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:05.185 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:05.185 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:05.757 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:05.757 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:05.757 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:05.757 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:05.757 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:05.757 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:05.757 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:05.757 CC lib/ftl/utils/ftl_conf.o 00:03:05.757 CC lib/ftl/utils/ftl_md.o 00:03:05.757 CC lib/ftl/utils/ftl_mempool.o 00:03:05.757 CC lib/ftl/utils/ftl_bitmap.o 00:03:05.757 CC lib/ftl/utils/ftl_property.o 00:03:05.757 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:05.757 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:05.757 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:05.757 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:05.757 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:05.757 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:05.757 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:06.015 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:06.015 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:06.015 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:06.015 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:06.015 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:06.015 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:06.016 CC lib/ftl/base/ftl_base_dev.o 00:03:06.016 CC lib/ftl/base/ftl_base_bdev.o 00:03:06.016 CC lib/ftl/ftl_trace.o 00:03:06.016 LIB libspdk_nbd.a 00:03:06.016 SO libspdk_nbd.so.7.0 00:03:06.274 SYMLINK 
libspdk_nbd.so 00:03:06.274 LIB libspdk_scsi.a 00:03:06.274 SO libspdk_scsi.so.9.0 00:03:06.274 LIB libspdk_ublk.a 00:03:06.274 SO libspdk_ublk.so.3.0 00:03:06.274 SYMLINK libspdk_scsi.so 00:03:06.532 SYMLINK libspdk_ublk.so 00:03:06.532 CC lib/iscsi/conn.o 00:03:06.532 CC lib/vhost/vhost.o 00:03:06.532 CC lib/iscsi/init_grp.o 00:03:06.532 CC lib/vhost/vhost_rpc.o 00:03:06.532 CC lib/iscsi/iscsi.o 00:03:06.532 CC lib/vhost/vhost_scsi.o 00:03:06.532 CC lib/iscsi/param.o 00:03:06.532 CC lib/vhost/vhost_blk.o 00:03:06.532 CC lib/iscsi/portal_grp.o 00:03:06.532 CC lib/vhost/rte_vhost_user.o 00:03:06.532 CC lib/iscsi/tgt_node.o 00:03:06.532 CC lib/iscsi/iscsi_subsystem.o 00:03:06.532 CC lib/iscsi/iscsi_rpc.o 00:03:06.532 CC lib/iscsi/task.o 00:03:06.790 LIB libspdk_ftl.a 00:03:07.048 SO libspdk_ftl.so.9.0 00:03:07.305 SYMLINK libspdk_ftl.so 00:03:07.870 LIB libspdk_vhost.a 00:03:07.870 SO libspdk_vhost.so.8.0 00:03:07.870 SYMLINK libspdk_vhost.so 00:03:07.870 LIB libspdk_nvmf.a 00:03:08.128 LIB libspdk_iscsi.a 00:03:08.128 SO libspdk_nvmf.so.20.0 00:03:08.128 SO libspdk_iscsi.so.8.0 00:03:08.128 SYMLINK libspdk_iscsi.so 00:03:08.128 SYMLINK libspdk_nvmf.so 00:03:08.386 CC module/env_dpdk/env_dpdk_rpc.o 00:03:08.386 CC module/vfu_device/vfu_virtio.o 00:03:08.386 CC module/vfu_device/vfu_virtio_blk.o 00:03:08.386 CC module/vfu_device/vfu_virtio_scsi.o 00:03:08.386 CC module/vfu_device/vfu_virtio_rpc.o 00:03:08.386 CC module/vfu_device/vfu_virtio_fs.o 00:03:08.644 CC module/accel/ioat/accel_ioat.o 00:03:08.644 CC module/accel/ioat/accel_ioat_rpc.o 00:03:08.644 CC module/accel/dsa/accel_dsa.o 00:03:08.644 CC module/accel/error/accel_error.o 00:03:08.644 CC module/accel/dsa/accel_dsa_rpc.o 00:03:08.644 CC module/scheduler/gscheduler/gscheduler.o 00:03:08.644 CC module/accel/iaa/accel_iaa.o 00:03:08.644 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:08.644 CC module/accel/error/accel_error_rpc.o 00:03:08.644 CC module/accel/iaa/accel_iaa_rpc.o 00:03:08.644 CC 
module/blob/bdev/blob_bdev.o 00:03:08.644 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:08.644 CC module/fsdev/aio/fsdev_aio.o 00:03:08.644 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:08.644 CC module/sock/posix/posix.o 00:03:08.644 CC module/keyring/linux/keyring.o 00:03:08.644 CC module/fsdev/aio/linux_aio_mgr.o 00:03:08.644 CC module/keyring/linux/keyring_rpc.o 00:03:08.644 CC module/keyring/file/keyring.o 00:03:08.644 CC module/keyring/file/keyring_rpc.o 00:03:08.644 LIB libspdk_env_dpdk_rpc.a 00:03:08.644 SO libspdk_env_dpdk_rpc.so.6.0 00:03:08.644 SYMLINK libspdk_env_dpdk_rpc.so 00:03:08.902 LIB libspdk_keyring_linux.a 00:03:08.902 LIB libspdk_scheduler_gscheduler.a 00:03:08.902 LIB libspdk_keyring_file.a 00:03:08.902 SO libspdk_scheduler_gscheduler.so.4.0 00:03:08.902 SO libspdk_keyring_linux.so.1.0 00:03:08.902 SO libspdk_keyring_file.so.2.0 00:03:08.902 LIB libspdk_accel_ioat.a 00:03:08.902 LIB libspdk_accel_error.a 00:03:08.902 LIB libspdk_scheduler_dynamic.a 00:03:08.902 LIB libspdk_accel_iaa.a 00:03:08.902 SYMLINK libspdk_scheduler_gscheduler.so 00:03:08.902 SYMLINK libspdk_keyring_linux.so 00:03:08.902 SO libspdk_accel_ioat.so.6.0 00:03:08.902 SO libspdk_accel_error.so.2.0 00:03:08.902 SO libspdk_scheduler_dynamic.so.4.0 00:03:08.902 SYMLINK libspdk_keyring_file.so 00:03:08.902 SO libspdk_accel_iaa.so.3.0 00:03:08.902 LIB libspdk_scheduler_dpdk_governor.a 00:03:08.902 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:08.902 SYMLINK libspdk_scheduler_dynamic.so 00:03:08.902 SYMLINK libspdk_accel_ioat.so 00:03:08.902 SYMLINK libspdk_accel_error.so 00:03:08.902 LIB libspdk_blob_bdev.a 00:03:08.902 SYMLINK libspdk_accel_iaa.so 00:03:08.902 SO libspdk_blob_bdev.so.11.0 00:03:08.902 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:08.902 SYMLINK libspdk_blob_bdev.so 00:03:08.902 LIB libspdk_accel_dsa.a 00:03:09.160 SO libspdk_accel_dsa.so.5.0 00:03:09.160 SYMLINK libspdk_accel_dsa.so 00:03:09.160 LIB libspdk_vfu_device.a 00:03:09.160 SO 
libspdk_vfu_device.so.3.0 00:03:09.160 CC module/bdev/delay/vbdev_delay.o 00:03:09.160 CC module/bdev/gpt/gpt.o 00:03:09.160 CC module/bdev/lvol/vbdev_lvol.o 00:03:09.160 CC module/bdev/gpt/vbdev_gpt.o 00:03:09.160 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:09.160 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:09.160 CC module/bdev/malloc/bdev_malloc.o 00:03:09.160 CC module/bdev/aio/bdev_aio.o 00:03:09.160 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:09.160 CC module/bdev/null/bdev_null.o 00:03:09.160 CC module/bdev/error/vbdev_error.o 00:03:09.160 CC module/bdev/aio/bdev_aio_rpc.o 00:03:09.160 CC module/bdev/null/bdev_null_rpc.o 00:03:09.161 CC module/bdev/error/vbdev_error_rpc.o 00:03:09.161 CC module/blobfs/bdev/blobfs_bdev.o 00:03:09.161 CC module/bdev/passthru/vbdev_passthru.o 00:03:09.161 CC module/bdev/ftl/bdev_ftl.o 00:03:09.161 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:09.161 CC module/bdev/raid/bdev_raid.o 00:03:09.161 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:09.161 CC module/bdev/raid/bdev_raid_rpc.o 00:03:09.161 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:09.161 CC module/bdev/split/vbdev_split.o 00:03:09.161 CC module/bdev/raid/bdev_raid_sb.o 00:03:09.161 CC module/bdev/raid/raid0.o 00:03:09.161 CC module/bdev/split/vbdev_split_rpc.o 00:03:09.161 CC module/bdev/nvme/bdev_nvme.o 00:03:09.161 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:09.161 CC module/bdev/raid/raid1.o 00:03:09.161 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:09.161 CC module/bdev/raid/concat.o 00:03:09.161 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:09.161 CC module/bdev/nvme/nvme_rpc.o 00:03:09.161 CC module/bdev/nvme/bdev_mdns_client.o 00:03:09.161 CC module/bdev/iscsi/bdev_iscsi.o 00:03:09.161 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:09.161 CC module/bdev/nvme/vbdev_opal.o 00:03:09.161 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:09.161 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:09.161 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:09.161 CC 
module/bdev/virtio/bdev_virtio_blk.o 00:03:09.161 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:09.422 SYMLINK libspdk_vfu_device.so 00:03:09.422 LIB libspdk_fsdev_aio.a 00:03:09.422 SO libspdk_fsdev_aio.so.1.0 00:03:09.422 LIB libspdk_sock_posix.a 00:03:09.680 SYMLINK libspdk_fsdev_aio.so 00:03:09.680 SO libspdk_sock_posix.so.6.0 00:03:09.680 LIB libspdk_blobfs_bdev.a 00:03:09.680 SO libspdk_blobfs_bdev.so.6.0 00:03:09.680 SYMLINK libspdk_sock_posix.so 00:03:09.680 LIB libspdk_bdev_split.a 00:03:09.680 LIB libspdk_bdev_passthru.a 00:03:09.680 SO libspdk_bdev_split.so.6.0 00:03:09.680 SYMLINK libspdk_blobfs_bdev.so 00:03:09.680 LIB libspdk_bdev_null.a 00:03:09.680 SO libspdk_bdev_passthru.so.6.0 00:03:09.680 LIB libspdk_bdev_ftl.a 00:03:09.680 LIB libspdk_bdev_error.a 00:03:09.680 SO libspdk_bdev_null.so.6.0 00:03:09.680 SO libspdk_bdev_ftl.so.6.0 00:03:09.680 LIB libspdk_bdev_gpt.a 00:03:09.680 SYMLINK libspdk_bdev_split.so 00:03:09.680 SO libspdk_bdev_error.so.6.0 00:03:09.680 SO libspdk_bdev_gpt.so.6.0 00:03:09.680 SYMLINK libspdk_bdev_passthru.so 00:03:09.938 LIB libspdk_bdev_aio.a 00:03:09.938 SYMLINK libspdk_bdev_null.so 00:03:09.938 SYMLINK libspdk_bdev_ftl.so 00:03:09.938 SO libspdk_bdev_aio.so.6.0 00:03:09.938 SYMLINK libspdk_bdev_error.so 00:03:09.938 LIB libspdk_bdev_zone_block.a 00:03:09.938 SYMLINK libspdk_bdev_gpt.so 00:03:09.938 LIB libspdk_bdev_iscsi.a 00:03:09.938 SO libspdk_bdev_zone_block.so.6.0 00:03:09.938 LIB libspdk_bdev_malloc.a 00:03:09.938 SO libspdk_bdev_iscsi.so.6.0 00:03:09.938 SYMLINK libspdk_bdev_aio.so 00:03:09.938 LIB libspdk_bdev_delay.a 00:03:09.938 SO libspdk_bdev_malloc.so.6.0 00:03:09.938 SO libspdk_bdev_delay.so.6.0 00:03:09.938 SYMLINK libspdk_bdev_zone_block.so 00:03:09.938 SYMLINK libspdk_bdev_iscsi.so 00:03:09.938 SYMLINK libspdk_bdev_malloc.so 00:03:09.938 SYMLINK libspdk_bdev_delay.so 00:03:09.938 LIB libspdk_bdev_lvol.a 00:03:09.938 SO libspdk_bdev_lvol.so.6.0 00:03:09.938 LIB libspdk_bdev_virtio.a 00:03:09.938 SO 
libspdk_bdev_virtio.so.6.0 00:03:09.938 SYMLINK libspdk_bdev_lvol.so 00:03:10.197 SYMLINK libspdk_bdev_virtio.so 00:03:10.462 LIB libspdk_bdev_raid.a 00:03:10.462 SO libspdk_bdev_raid.so.6.0 00:03:10.720 SYMLINK libspdk_bdev_raid.so 00:03:12.100 LIB libspdk_bdev_nvme.a 00:03:12.100 SO libspdk_bdev_nvme.so.7.1 00:03:12.100 SYMLINK libspdk_bdev_nvme.so 00:03:12.357 CC module/event/subsystems/vmd/vmd.o 00:03:12.357 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:12.357 CC module/event/subsystems/fsdev/fsdev.o 00:03:12.357 CC module/event/subsystems/iobuf/iobuf.o 00:03:12.357 CC module/event/subsystems/keyring/keyring.o 00:03:12.357 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:12.357 CC module/event/subsystems/scheduler/scheduler.o 00:03:12.357 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:12.357 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:12.357 CC module/event/subsystems/sock/sock.o 00:03:12.615 LIB libspdk_event_keyring.a 00:03:12.615 LIB libspdk_event_vhost_blk.a 00:03:12.615 LIB libspdk_event_scheduler.a 00:03:12.615 LIB libspdk_event_fsdev.a 00:03:12.615 LIB libspdk_event_vfu_tgt.a 00:03:12.615 LIB libspdk_event_vmd.a 00:03:12.615 LIB libspdk_event_sock.a 00:03:12.615 SO libspdk_event_keyring.so.1.0 00:03:12.615 SO libspdk_event_fsdev.so.1.0 00:03:12.615 SO libspdk_event_vhost_blk.so.3.0 00:03:12.615 SO libspdk_event_scheduler.so.4.0 00:03:12.615 SO libspdk_event_vfu_tgt.so.3.0 00:03:12.615 LIB libspdk_event_iobuf.a 00:03:12.615 SO libspdk_event_vmd.so.6.0 00:03:12.615 SO libspdk_event_sock.so.5.0 00:03:12.615 SO libspdk_event_iobuf.so.3.0 00:03:12.615 SYMLINK libspdk_event_keyring.so 00:03:12.615 SYMLINK libspdk_event_vhost_blk.so 00:03:12.615 SYMLINK libspdk_event_fsdev.so 00:03:12.615 SYMLINK libspdk_event_scheduler.so 00:03:12.615 SYMLINK libspdk_event_vfu_tgt.so 00:03:12.615 SYMLINK libspdk_event_sock.so 00:03:12.615 SYMLINK libspdk_event_vmd.so 00:03:12.615 SYMLINK libspdk_event_iobuf.so 00:03:12.873 CC 
module/event/subsystems/accel/accel.o 00:03:12.873 LIB libspdk_event_accel.a 00:03:12.873 SO libspdk_event_accel.so.6.0 00:03:13.131 SYMLINK libspdk_event_accel.so 00:03:13.131 CC module/event/subsystems/bdev/bdev.o 00:03:13.389 LIB libspdk_event_bdev.a 00:03:13.389 SO libspdk_event_bdev.so.6.0 00:03:13.389 SYMLINK libspdk_event_bdev.so 00:03:13.646 CC module/event/subsystems/ublk/ublk.o 00:03:13.646 CC module/event/subsystems/scsi/scsi.o 00:03:13.646 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:13.646 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:13.646 CC module/event/subsystems/nbd/nbd.o 00:03:13.904 LIB libspdk_event_ublk.a 00:03:13.904 LIB libspdk_event_nbd.a 00:03:13.904 SO libspdk_event_ublk.so.3.0 00:03:13.904 SO libspdk_event_nbd.so.6.0 00:03:13.904 LIB libspdk_event_scsi.a 00:03:13.904 SO libspdk_event_scsi.so.6.0 00:03:13.904 SYMLINK libspdk_event_ublk.so 00:03:13.904 SYMLINK libspdk_event_nbd.so 00:03:13.904 SYMLINK libspdk_event_scsi.so 00:03:13.904 LIB libspdk_event_nvmf.a 00:03:13.904 SO libspdk_event_nvmf.so.6.0 00:03:13.904 SYMLINK libspdk_event_nvmf.so 00:03:14.161 CC module/event/subsystems/iscsi/iscsi.o 00:03:14.161 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:14.161 LIB libspdk_event_vhost_scsi.a 00:03:14.161 SO libspdk_event_vhost_scsi.so.3.0 00:03:14.161 LIB libspdk_event_iscsi.a 00:03:14.161 SO libspdk_event_iscsi.so.6.0 00:03:14.161 SYMLINK libspdk_event_vhost_scsi.so 00:03:14.161 SYMLINK libspdk_event_iscsi.so 00:03:14.419 SO libspdk.so.6.0 00:03:14.419 SYMLINK libspdk.so 00:03:14.683 CXX app/trace/trace.o 00:03:14.683 CC app/trace_record/trace_record.o 00:03:14.683 CC app/spdk_nvme_discover/discovery_aer.o 00:03:14.683 CC app/spdk_nvme_perf/perf.o 00:03:14.683 CC app/spdk_top/spdk_top.o 00:03:14.683 TEST_HEADER include/spdk/accel.h 00:03:14.683 TEST_HEADER include/spdk/accel_module.h 00:03:14.683 TEST_HEADER include/spdk/assert.h 00:03:14.683 CC app/spdk_nvme_identify/identify.o 00:03:14.683 CC 
app/spdk_lspci/spdk_lspci.o 00:03:14.683 TEST_HEADER include/spdk/barrier.h 00:03:14.683 CC test/rpc_client/rpc_client_test.o 00:03:14.683 TEST_HEADER include/spdk/base64.h 00:03:14.683 TEST_HEADER include/spdk/bdev.h 00:03:14.683 TEST_HEADER include/spdk/bdev_module.h 00:03:14.683 TEST_HEADER include/spdk/bdev_zone.h 00:03:14.683 TEST_HEADER include/spdk/bit_array.h 00:03:14.683 TEST_HEADER include/spdk/bit_pool.h 00:03:14.683 TEST_HEADER include/spdk/blob_bdev.h 00:03:14.683 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:14.683 TEST_HEADER include/spdk/blobfs.h 00:03:14.683 TEST_HEADER include/spdk/conf.h 00:03:14.683 TEST_HEADER include/spdk/blob.h 00:03:14.683 TEST_HEADER include/spdk/config.h 00:03:14.683 TEST_HEADER include/spdk/cpuset.h 00:03:14.683 TEST_HEADER include/spdk/crc32.h 00:03:14.683 TEST_HEADER include/spdk/crc16.h 00:03:14.683 TEST_HEADER include/spdk/crc64.h 00:03:14.683 TEST_HEADER include/spdk/dif.h 00:03:14.683 TEST_HEADER include/spdk/dma.h 00:03:14.683 TEST_HEADER include/spdk/endian.h 00:03:14.683 TEST_HEADER include/spdk/env_dpdk.h 00:03:14.683 TEST_HEADER include/spdk/env.h 00:03:14.683 TEST_HEADER include/spdk/event.h 00:03:14.683 TEST_HEADER include/spdk/fd.h 00:03:14.683 TEST_HEADER include/spdk/fd_group.h 00:03:14.683 TEST_HEADER include/spdk/file.h 00:03:14.683 TEST_HEADER include/spdk/fsdev.h 00:03:14.683 TEST_HEADER include/spdk/fsdev_module.h 00:03:14.683 TEST_HEADER include/spdk/ftl.h 00:03:14.683 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:14.683 TEST_HEADER include/spdk/gpt_spec.h 00:03:14.683 TEST_HEADER include/spdk/hexlify.h 00:03:14.683 TEST_HEADER include/spdk/histogram_data.h 00:03:14.683 TEST_HEADER include/spdk/idxd_spec.h 00:03:14.683 TEST_HEADER include/spdk/idxd.h 00:03:14.683 TEST_HEADER include/spdk/init.h 00:03:14.683 TEST_HEADER include/spdk/ioat_spec.h 00:03:14.683 TEST_HEADER include/spdk/ioat.h 00:03:14.683 TEST_HEADER include/spdk/iscsi_spec.h 00:03:14.683 TEST_HEADER include/spdk/json.h 00:03:14.683 
TEST_HEADER include/spdk/keyring.h 00:03:14.683 TEST_HEADER include/spdk/jsonrpc.h 00:03:14.683 TEST_HEADER include/spdk/keyring_module.h 00:03:14.683 TEST_HEADER include/spdk/likely.h 00:03:14.683 TEST_HEADER include/spdk/lvol.h 00:03:14.683 TEST_HEADER include/spdk/log.h 00:03:14.683 TEST_HEADER include/spdk/memory.h 00:03:14.683 TEST_HEADER include/spdk/md5.h 00:03:14.683 TEST_HEADER include/spdk/mmio.h 00:03:14.683 TEST_HEADER include/spdk/nbd.h 00:03:14.683 TEST_HEADER include/spdk/net.h 00:03:14.683 TEST_HEADER include/spdk/notify.h 00:03:14.683 TEST_HEADER include/spdk/nvme.h 00:03:14.683 TEST_HEADER include/spdk/nvme_intel.h 00:03:14.683 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:14.683 TEST_HEADER include/spdk/nvme_spec.h 00:03:14.683 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:14.683 TEST_HEADER include/spdk/nvme_zns.h 00:03:14.683 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:14.683 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:14.683 TEST_HEADER include/spdk/nvmf.h 00:03:14.683 TEST_HEADER include/spdk/nvmf_spec.h 00:03:14.683 TEST_HEADER include/spdk/nvmf_transport.h 00:03:14.683 TEST_HEADER include/spdk/opal.h 00:03:14.683 TEST_HEADER include/spdk/pci_ids.h 00:03:14.683 TEST_HEADER include/spdk/opal_spec.h 00:03:14.683 TEST_HEADER include/spdk/queue.h 00:03:14.683 TEST_HEADER include/spdk/pipe.h 00:03:14.683 TEST_HEADER include/spdk/reduce.h 00:03:14.683 TEST_HEADER include/spdk/rpc.h 00:03:14.683 TEST_HEADER include/spdk/scsi.h 00:03:14.683 TEST_HEADER include/spdk/scheduler.h 00:03:14.683 TEST_HEADER include/spdk/scsi_spec.h 00:03:14.683 TEST_HEADER include/spdk/sock.h 00:03:14.683 TEST_HEADER include/spdk/stdinc.h 00:03:14.683 TEST_HEADER include/spdk/string.h 00:03:14.683 TEST_HEADER include/spdk/thread.h 00:03:14.683 TEST_HEADER include/spdk/trace.h 00:03:14.683 TEST_HEADER include/spdk/trace_parser.h 00:03:14.683 TEST_HEADER include/spdk/tree.h 00:03:14.683 TEST_HEADER include/spdk/ublk.h 00:03:14.683 TEST_HEADER include/spdk/util.h 
00:03:14.683 TEST_HEADER include/spdk/uuid.h 00:03:14.683 TEST_HEADER include/spdk/version.h 00:03:14.683 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:14.683 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:14.683 TEST_HEADER include/spdk/vhost.h 00:03:14.683 TEST_HEADER include/spdk/vmd.h 00:03:14.683 TEST_HEADER include/spdk/xor.h 00:03:14.683 TEST_HEADER include/spdk/zipf.h 00:03:14.683 CXX test/cpp_headers/accel.o 00:03:14.683 CC app/spdk_dd/spdk_dd.o 00:03:14.683 CXX test/cpp_headers/accel_module.o 00:03:14.683 CXX test/cpp_headers/assert.o 00:03:14.683 CXX test/cpp_headers/barrier.o 00:03:14.683 CXX test/cpp_headers/base64.o 00:03:14.683 CXX test/cpp_headers/bdev.o 00:03:14.683 CXX test/cpp_headers/bdev_module.o 00:03:14.683 CXX test/cpp_headers/bdev_zone.o 00:03:14.683 CXX test/cpp_headers/bit_array.o 00:03:14.683 CXX test/cpp_headers/bit_pool.o 00:03:14.683 CXX test/cpp_headers/blob_bdev.o 00:03:14.683 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:14.683 CXX test/cpp_headers/blobfs_bdev.o 00:03:14.683 CXX test/cpp_headers/blobfs.o 00:03:14.683 CXX test/cpp_headers/blob.o 00:03:14.683 CXX test/cpp_headers/conf.o 00:03:14.683 CC app/iscsi_tgt/iscsi_tgt.o 00:03:14.683 CXX test/cpp_headers/config.o 00:03:14.683 CXX test/cpp_headers/cpuset.o 00:03:14.683 CXX test/cpp_headers/crc16.o 00:03:14.683 CC app/nvmf_tgt/nvmf_main.o 00:03:14.683 CXX test/cpp_headers/crc32.o 00:03:14.683 CC app/spdk_tgt/spdk_tgt.o 00:03:14.683 CC test/env/pci/pci_ut.o 00:03:14.683 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:14.683 CC test/app/jsoncat/jsoncat.o 00:03:14.683 CC test/app/histogram_perf/histogram_perf.o 00:03:14.683 CC test/thread/poller_perf/poller_perf.o 00:03:14.683 CC examples/util/zipf/zipf.o 00:03:14.683 CC test/env/memory/memory_ut.o 00:03:14.683 CC examples/ioat/perf/perf.o 00:03:14.683 CC test/app/stub/stub.o 00:03:14.683 CC examples/ioat/verify/verify.o 00:03:14.683 CC app/fio/nvme/fio_plugin.o 00:03:14.683 CC test/env/vtophys/vtophys.o 00:03:14.683 
CC test/dma/test_dma/test_dma.o 00:03:14.683 CC test/app/bdev_svc/bdev_svc.o 00:03:14.683 CC app/fio/bdev/fio_plugin.o 00:03:14.944 CC test/env/mem_callbacks/mem_callbacks.o 00:03:14.944 LINK spdk_lspci 00:03:14.944 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:14.944 LINK rpc_client_test 00:03:14.944 LINK spdk_nvme_discover 00:03:14.944 LINK jsoncat 00:03:14.944 LINK histogram_perf 00:03:15.210 CXX test/cpp_headers/crc64.o 00:03:15.210 LINK env_dpdk_post_init 00:03:15.210 LINK spdk_trace_record 00:03:15.210 LINK nvmf_tgt 00:03:15.210 CXX test/cpp_headers/dif.o 00:03:15.210 LINK poller_perf 00:03:15.210 LINK interrupt_tgt 00:03:15.210 LINK zipf 00:03:15.210 CXX test/cpp_headers/dma.o 00:03:15.210 LINK vtophys 00:03:15.210 CXX test/cpp_headers/endian.o 00:03:15.210 CXX test/cpp_headers/env_dpdk.o 00:03:15.210 CXX test/cpp_headers/env.o 00:03:15.210 CXX test/cpp_headers/event.o 00:03:15.210 CXX test/cpp_headers/fd_group.o 00:03:15.210 CXX test/cpp_headers/fd.o 00:03:15.210 CXX test/cpp_headers/file.o 00:03:15.210 CXX test/cpp_headers/fsdev.o 00:03:15.210 LINK iscsi_tgt 00:03:15.210 CXX test/cpp_headers/fsdev_module.o 00:03:15.210 LINK stub 00:03:15.210 CXX test/cpp_headers/ftl.o 00:03:15.210 CXX test/cpp_headers/fuse_dispatcher.o 00:03:15.210 LINK spdk_tgt 00:03:15.210 CXX test/cpp_headers/gpt_spec.o 00:03:15.210 CXX test/cpp_headers/hexlify.o 00:03:15.210 LINK verify 00:03:15.210 CXX test/cpp_headers/histogram_data.o 00:03:15.210 LINK bdev_svc 00:03:15.210 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:15.210 LINK ioat_perf 00:03:15.210 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:15.210 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:15.210 CXX test/cpp_headers/idxd.o 00:03:15.210 CXX test/cpp_headers/idxd_spec.o 00:03:15.475 CXX test/cpp_headers/init.o 00:03:15.475 CXX test/cpp_headers/ioat.o 00:03:15.475 LINK spdk_dd 00:03:15.475 CXX test/cpp_headers/ioat_spec.o 00:03:15.475 CXX test/cpp_headers/iscsi_spec.o 00:03:15.475 CXX test/cpp_headers/json.o 
00:03:15.475 CXX test/cpp_headers/jsonrpc.o 00:03:15.476 CXX test/cpp_headers/keyring.o 00:03:15.476 LINK spdk_trace 00:03:15.476 CXX test/cpp_headers/keyring_module.o 00:03:15.476 LINK pci_ut 00:03:15.476 CXX test/cpp_headers/likely.o 00:03:15.476 CXX test/cpp_headers/log.o 00:03:15.476 CXX test/cpp_headers/lvol.o 00:03:15.476 CXX test/cpp_headers/md5.o 00:03:15.476 CXX test/cpp_headers/memory.o 00:03:15.476 CXX test/cpp_headers/mmio.o 00:03:15.476 CXX test/cpp_headers/nbd.o 00:03:15.476 CXX test/cpp_headers/net.o 00:03:15.476 CXX test/cpp_headers/notify.o 00:03:15.476 CXX test/cpp_headers/nvme.o 00:03:15.741 CXX test/cpp_headers/nvme_intel.o 00:03:15.741 CXX test/cpp_headers/nvme_ocssd.o 00:03:15.741 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:15.741 CXX test/cpp_headers/nvme_spec.o 00:03:15.741 CXX test/cpp_headers/nvme_zns.o 00:03:15.741 CXX test/cpp_headers/nvmf_cmd.o 00:03:15.741 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:15.741 CXX test/cpp_headers/nvmf.o 00:03:15.741 LINK nvme_fuzz 00:03:15.741 CXX test/cpp_headers/nvmf_spec.o 00:03:15.741 CXX test/cpp_headers/nvmf_transport.o 00:03:15.741 LINK spdk_bdev 00:03:15.741 CC test/event/event_perf/event_perf.o 00:03:15.741 CXX test/cpp_headers/opal.o 00:03:15.741 CXX test/cpp_headers/opal_spec.o 00:03:15.741 CXX test/cpp_headers/pci_ids.o 00:03:15.741 CC test/event/reactor/reactor.o 00:03:15.741 CC examples/sock/hello_world/hello_sock.o 00:03:16.002 CC examples/idxd/perf/perf.o 00:03:16.002 CC examples/vmd/lsvmd/lsvmd.o 00:03:16.002 LINK spdk_nvme 00:03:16.002 CXX test/cpp_headers/pipe.o 00:03:16.002 CC examples/vmd/led/led.o 00:03:16.002 CXX test/cpp_headers/queue.o 00:03:16.002 CXX test/cpp_headers/reduce.o 00:03:16.002 LINK test_dma 00:03:16.002 CC test/event/reactor_perf/reactor_perf.o 00:03:16.002 CC examples/thread/thread/thread_ex.o 00:03:16.002 CXX test/cpp_headers/rpc.o 00:03:16.002 CXX test/cpp_headers/scheduler.o 00:03:16.002 CXX test/cpp_headers/scsi.o 00:03:16.002 CC 
test/event/app_repeat/app_repeat.o 00:03:16.002 CXX test/cpp_headers/scsi_spec.o 00:03:16.002 CXX test/cpp_headers/sock.o 00:03:16.002 CXX test/cpp_headers/stdinc.o 00:03:16.002 CXX test/cpp_headers/string.o 00:03:16.002 CXX test/cpp_headers/thread.o 00:03:16.002 CXX test/cpp_headers/trace.o 00:03:16.002 CXX test/cpp_headers/trace_parser.o 00:03:16.002 CXX test/cpp_headers/tree.o 00:03:16.002 CXX test/cpp_headers/ublk.o 00:03:16.002 CXX test/cpp_headers/util.o 00:03:16.002 CXX test/cpp_headers/uuid.o 00:03:16.002 CC test/event/scheduler/scheduler.o 00:03:16.002 CXX test/cpp_headers/version.o 00:03:16.002 LINK vhost_fuzz 00:03:16.002 LINK spdk_nvme_perf 00:03:16.002 CXX test/cpp_headers/vfio_user_pci.o 00:03:16.002 CXX test/cpp_headers/vfio_user_spec.o 00:03:16.002 CXX test/cpp_headers/vhost.o 00:03:16.002 CXX test/cpp_headers/vmd.o 00:03:16.002 CXX test/cpp_headers/xor.o 00:03:16.002 LINK mem_callbacks 00:03:16.260 CXX test/cpp_headers/zipf.o 00:03:16.260 LINK event_perf 00:03:16.260 LINK spdk_nvme_identify 00:03:16.260 LINK lsvmd 00:03:16.260 LINK reactor 00:03:16.260 CC app/vhost/vhost.o 00:03:16.260 LINK led 00:03:16.260 LINK reactor_perf 00:03:16.260 LINK spdk_top 00:03:16.260 LINK app_repeat 00:03:16.260 LINK hello_sock 00:03:16.519 LINK thread 00:03:16.519 LINK scheduler 00:03:16.519 LINK idxd_perf 00:03:16.519 CC test/nvme/startup/startup.o 00:03:16.519 CC test/nvme/reserve/reserve.o 00:03:16.519 CC test/nvme/boot_partition/boot_partition.o 00:03:16.519 CC test/nvme/reset/reset.o 00:03:16.519 CC test/nvme/cuse/cuse.o 00:03:16.519 CC test/nvme/err_injection/err_injection.o 00:03:16.519 CC test/nvme/simple_copy/simple_copy.o 00:03:16.519 CC test/nvme/sgl/sgl.o 00:03:16.519 CC test/nvme/aer/aer.o 00:03:16.519 CC test/nvme/e2edp/nvme_dp.o 00:03:16.519 CC test/nvme/compliance/nvme_compliance.o 00:03:16.519 CC test/nvme/fused_ordering/fused_ordering.o 00:03:16.519 CC test/nvme/overhead/overhead.o 00:03:16.519 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:16.519 
CC test/nvme/fdp/fdp.o 00:03:16.519 CC test/nvme/connect_stress/connect_stress.o 00:03:16.519 LINK vhost 00:03:16.519 CC test/accel/dif/dif.o 00:03:16.519 CC test/blobfs/mkfs/mkfs.o 00:03:16.519 CC test/lvol/esnap/esnap.o 00:03:16.778 LINK startup 00:03:16.778 LINK boot_partition 00:03:16.778 LINK connect_stress 00:03:16.778 CC examples/nvme/abort/abort.o 00:03:16.778 CC examples/nvme/reconnect/reconnect.o 00:03:16.778 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:16.778 CC examples/nvme/hello_world/hello_world.o 00:03:16.778 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:16.778 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:16.778 CC examples/nvme/hotplug/hotplug.o 00:03:16.778 CC examples/nvme/arbitration/arbitration.o 00:03:16.778 LINK err_injection 00:03:16.778 LINK fused_ordering 00:03:16.778 LINK simple_copy 00:03:16.778 CC examples/accel/perf/accel_perf.o 00:03:17.064 LINK mkfs 00:03:17.064 LINK reserve 00:03:17.064 LINK sgl 00:03:17.064 LINK doorbell_aers 00:03:17.064 LINK aer 00:03:17.064 LINK memory_ut 00:03:17.064 LINK reset 00:03:17.064 LINK nvme_compliance 00:03:17.064 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:17.064 CC examples/blob/cli/blobcli.o 00:03:17.064 CC examples/blob/hello_world/hello_blob.o 00:03:17.064 LINK overhead 00:03:17.064 LINK nvme_dp 00:03:17.064 LINK fdp 00:03:17.064 LINK pmr_persistence 00:03:17.064 LINK cmb_copy 00:03:17.064 LINK hotplug 00:03:17.064 LINK hello_world 00:03:17.394 LINK reconnect 00:03:17.394 LINK arbitration 00:03:17.394 LINK hello_fsdev 00:03:17.394 LINK abort 00:03:17.394 LINK hello_blob 00:03:17.394 LINK nvme_manage 00:03:17.680 LINK dif 00:03:17.680 LINK accel_perf 00:03:17.680 LINK blobcli 00:03:17.937 LINK iscsi_fuzz 00:03:17.937 CC examples/bdev/hello_world/hello_bdev.o 00:03:17.937 CC test/bdev/bdevio/bdevio.o 00:03:17.937 CC examples/bdev/bdevperf/bdevperf.o 00:03:18.195 LINK cuse 00:03:18.195 LINK hello_bdev 00:03:18.195 LINK bdevio 00:03:18.759 LINK bdevperf 00:03:19.017 CC 
examples/nvmf/nvmf/nvmf.o 00:03:19.275 LINK nvmf 00:03:21.804 LINK esnap 00:03:22.063 00:03:22.063 real 1m10.839s 00:03:22.063 user 11m53.507s 00:03:22.063 sys 2m37.188s 00:03:22.063 08:53:59 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:03:22.063 08:53:59 make -- common/autotest_common.sh@10 -- $ set +x 00:03:22.063 ************************************ 00:03:22.063 END TEST make 00:03:22.063 ************************************ 00:03:22.063 08:53:59 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:22.063 08:53:59 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:22.063 08:53:59 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:22.063 08:53:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.063 08:53:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:22.063 08:53:59 -- pm/common@44 -- $ pid=3127613 00:03:22.063 08:53:59 -- pm/common@50 -- $ kill -TERM 3127613 00:03:22.063 08:53:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.063 08:53:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:22.063 08:53:59 -- pm/common@44 -- $ pid=3127615 00:03:22.063 08:53:59 -- pm/common@50 -- $ kill -TERM 3127615 00:03:22.063 08:53:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.063 08:53:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:22.063 08:53:59 -- pm/common@44 -- $ pid=3127617 00:03:22.063 08:53:59 -- pm/common@50 -- $ kill -TERM 3127617 00:03:22.063 08:53:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.063 08:53:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:22.063 08:53:59 -- pm/common@44 -- $ pid=3127646 00:03:22.063 08:53:59 -- 
pm/common@50 -- $ sudo -E kill -TERM 3127646 00:03:22.331 08:53:59 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:22.332 08:53:59 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:22.332 08:53:59 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:22.332 08:53:59 -- common/autotest_common.sh@1691 -- # lcov --version 00:03:22.332 08:53:59 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:22.332 08:53:59 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:22.332 08:53:59 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:22.332 08:53:59 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:22.332 08:53:59 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:22.332 08:53:59 -- scripts/common.sh@336 -- # IFS=.-: 00:03:22.332 08:53:59 -- scripts/common.sh@336 -- # read -ra ver1 00:03:22.332 08:53:59 -- scripts/common.sh@337 -- # IFS=.-: 00:03:22.332 08:53:59 -- scripts/common.sh@337 -- # read -ra ver2 00:03:22.332 08:53:59 -- scripts/common.sh@338 -- # local 'op=<' 00:03:22.332 08:53:59 -- scripts/common.sh@340 -- # ver1_l=2 00:03:22.332 08:53:59 -- scripts/common.sh@341 -- # ver2_l=1 00:03:22.332 08:53:59 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:22.332 08:53:59 -- scripts/common.sh@344 -- # case "$op" in 00:03:22.332 08:53:59 -- scripts/common.sh@345 -- # : 1 00:03:22.332 08:53:59 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:22.332 08:53:59 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:22.332 08:53:59 -- scripts/common.sh@365 -- # decimal 1 00:03:22.332 08:53:59 -- scripts/common.sh@353 -- # local d=1 00:03:22.332 08:53:59 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:22.332 08:53:59 -- scripts/common.sh@355 -- # echo 1 00:03:22.332 08:53:59 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:22.332 08:53:59 -- scripts/common.sh@366 -- # decimal 2 00:03:22.332 08:53:59 -- scripts/common.sh@353 -- # local d=2 00:03:22.332 08:53:59 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:22.332 08:53:59 -- scripts/common.sh@355 -- # echo 2 00:03:22.332 08:53:59 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:22.332 08:53:59 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:22.332 08:53:59 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:22.332 08:53:59 -- scripts/common.sh@368 -- # return 0 00:03:22.332 08:53:59 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:22.332 08:53:59 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:22.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.332 --rc genhtml_branch_coverage=1 00:03:22.332 --rc genhtml_function_coverage=1 00:03:22.332 --rc genhtml_legend=1 00:03:22.332 --rc geninfo_all_blocks=1 00:03:22.332 --rc geninfo_unexecuted_blocks=1 00:03:22.332 00:03:22.332 ' 00:03:22.332 08:53:59 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:22.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.332 --rc genhtml_branch_coverage=1 00:03:22.332 --rc genhtml_function_coverage=1 00:03:22.332 --rc genhtml_legend=1 00:03:22.333 --rc geninfo_all_blocks=1 00:03:22.333 --rc geninfo_unexecuted_blocks=1 00:03:22.333 00:03:22.333 ' 00:03:22.333 08:53:59 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:22.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.333 --rc genhtml_branch_coverage=1 00:03:22.333 --rc 
genhtml_function_coverage=1 00:03:22.333 --rc genhtml_legend=1 00:03:22.333 --rc geninfo_all_blocks=1 00:03:22.333 --rc geninfo_unexecuted_blocks=1 00:03:22.333 00:03:22.333 ' 00:03:22.333 08:53:59 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:22.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.333 --rc genhtml_branch_coverage=1 00:03:22.333 --rc genhtml_function_coverage=1 00:03:22.333 --rc genhtml_legend=1 00:03:22.333 --rc geninfo_all_blocks=1 00:03:22.333 --rc geninfo_unexecuted_blocks=1 00:03:22.333 00:03:22.333 ' 00:03:22.333 08:53:59 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:22.333 08:53:59 -- nvmf/common.sh@7 -- # uname -s 00:03:22.333 08:53:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:22.333 08:53:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:22.333 08:53:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:22.333 08:53:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:22.333 08:53:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:22.333 08:53:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:22.333 08:53:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:22.333 08:53:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:22.333 08:53:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:22.333 08:53:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:22.333 08:53:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:03:22.333 08:53:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:03:22.333 08:53:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:22.333 08:53:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:22.333 08:53:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:22.333 08:53:59 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:22.333 08:53:59 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:22.333 08:53:59 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:22.333 08:53:59 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:22.333 08:53:59 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:22.334 08:53:59 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:22.334 08:53:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.334 08:53:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.334 08:53:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.334 08:53:59 -- paths/export.sh@5 -- # export PATH 00:03:22.334 08:53:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.334 08:53:59 -- nvmf/common.sh@51 -- # : 0 00:03:22.334 08:53:59 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:22.334 08:53:59 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:03:22.334 08:53:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:22.334 08:53:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:22.334 08:53:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:22.334 08:53:59 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:22.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:22.334 08:53:59 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:22.334 08:53:59 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:22.334 08:53:59 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:22.334 08:53:59 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:22.334 08:53:59 -- spdk/autotest.sh@32 -- # uname -s 00:03:22.334 08:53:59 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:22.334 08:53:59 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:22.334 08:53:59 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:22.334 08:53:59 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:22.334 08:53:59 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:22.334 08:53:59 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:22.335 08:53:59 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:22.335 08:53:59 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:22.335 08:53:59 -- spdk/autotest.sh@48 -- # udevadm_pid=3187049 00:03:22.335 08:53:59 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:22.335 08:53:59 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:22.335 08:53:59 -- pm/common@17 -- # local monitor 00:03:22.335 08:53:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.335 08:53:59 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:03:22.335 08:53:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.335 08:53:59 -- pm/common@21 -- # date +%s 00:03:22.335 08:53:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.335 08:53:59 -- pm/common@21 -- # date +%s 00:03:22.335 08:53:59 -- pm/common@25 -- # sleep 1 00:03:22.335 08:53:59 -- pm/common@21 -- # date +%s 00:03:22.335 08:53:59 -- pm/common@21 -- # date +%s 00:03:22.335 08:53:59 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732089239 00:03:22.335 08:53:59 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732089239 00:03:22.335 08:53:59 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732089239 00:03:22.335 08:53:59 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732089239 00:03:22.335 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732089239_collect-cpu-load.pm.log 00:03:22.335 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732089239_collect-vmstat.pm.log 00:03:22.336 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732089239_collect-cpu-temp.pm.log 00:03:22.336 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732089239_collect-bmc-pm.bmc.pm.log 00:03:23.271 
08:54:00 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:23.271 08:54:00 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:23.271 08:54:00 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:23.271 08:54:00 -- common/autotest_common.sh@10 -- # set +x 00:03:23.271 08:54:00 -- spdk/autotest.sh@59 -- # create_test_list 00:03:23.271 08:54:00 -- common/autotest_common.sh@750 -- # xtrace_disable 00:03:23.271 08:54:00 -- common/autotest_common.sh@10 -- # set +x 00:03:23.528 08:54:00 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:23.528 08:54:00 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:23.528 08:54:00 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:23.528 08:54:00 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:23.528 08:54:00 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:23.528 08:54:00 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:23.528 08:54:00 -- common/autotest_common.sh@1455 -- # uname 00:03:23.528 08:54:00 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:23.528 08:54:00 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:23.528 08:54:00 -- common/autotest_common.sh@1475 -- # uname 00:03:23.528 08:54:00 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:23.528 08:54:00 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:23.528 08:54:00 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:23.528 lcov: LCOV version 1.15 00:03:23.528 08:54:00 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:41.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:41.616 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:03.552 08:54:37 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:03.552 08:54:37 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:03.552 08:54:37 -- common/autotest_common.sh@10 -- # set +x 00:04:03.552 08:54:37 -- spdk/autotest.sh@78 -- # rm -f 00:04:03.552 08:54:37 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:03.552 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:03.552 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:03.552 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:03.552 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:03.552 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:03.552 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:03.552 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:03.552 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:03.552 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:04:03.553 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:03.553 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:03.553 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:03.553 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:03.553 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:03.553 
0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:03.553 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:03.553 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:03.553 08:54:39 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:03.553 08:54:39 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:03.553 08:54:39 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:03.553 08:54:39 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:03.553 08:54:39 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:03.553 08:54:39 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:03.553 08:54:39 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:03.553 08:54:39 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:03.553 08:54:39 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:03.553 08:54:39 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:03.553 08:54:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:03.553 08:54:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:03.553 08:54:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:03.553 08:54:39 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:03.553 08:54:39 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:03.553 No valid GPT data, bailing 00:04:03.553 08:54:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:03.553 08:54:39 -- scripts/common.sh@394 -- # pt= 00:04:03.553 08:54:39 -- scripts/common.sh@395 -- # return 1 00:04:03.553 08:54:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:03.553 1+0 records in 00:04:03.553 1+0 records out 00:04:03.553 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00225543 s, 465 MB/s 00:04:03.553 08:54:39 -- spdk/autotest.sh@105 -- # sync 00:04:03.553 08:54:39 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:03.553 08:54:39 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:03.553 08:54:39 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:04.488 08:54:41 -- spdk/autotest.sh@111 -- # uname -s 00:04:04.488 08:54:41 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:04.488 08:54:41 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:04.488 08:54:41 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:05.868 Hugepages 00:04:05.868 node hugesize free / total 00:04:05.868 node0 1048576kB 0 / 0 00:04:05.868 node0 2048kB 0 / 0 00:04:05.868 node1 1048576kB 0 / 0 00:04:05.868 node1 2048kB 0 / 0 00:04:05.868 00:04:05.869 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:05.869 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:05.869 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:05.869 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:05.869 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:05.869 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:05.869 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:05.869 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:05.869 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:05.869 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:05.869 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:05.869 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:05.869 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:05.869 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:05.869 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:05.869 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:05.869 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:05.869 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:05.869 08:54:42 -- spdk/autotest.sh@117 -- # uname -s 00:04:05.869 08:54:42 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:05.869 08:54:42 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:04:05.869 08:54:42 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:07.249 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:07.249 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:07.249 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:07.249 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:07.249 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:07.249 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:07.249 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:07.249 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:07.249 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:07.249 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:07.249 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:07.249 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:07.249 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:07.249 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:07.249 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:07.249 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:08.187 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:04:08.187 08:54:45 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:09.126 08:54:46 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:09.126 08:54:46 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:09.126 08:54:46 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:09.126 08:54:46 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:09.126 08:54:46 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:09.126 08:54:46 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:09.126 08:54:46 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:09.126 08:54:46 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:09.126 08:54:46 -- common/autotest_common.sh@1497 -- # jq -r 
'.config[].params.traddr' 00:04:09.385 08:54:46 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:09.385 08:54:46 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:0b:00.0 00:04:09.385 08:54:46 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:10.319 Waiting for block devices as requested 00:04:10.319 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:10.579 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:10.579 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:10.579 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:10.840 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:10.840 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:10.840 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:10.840 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:11.101 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:04:11.101 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:11.101 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:11.360 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:11.360 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:11.360 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:11.618 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:11.618 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:11.618 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:11.878 08:54:48 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:11.878 08:54:48 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:0b:00.0 00:04:11.878 08:54:48 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:04:11.878 08:54:48 -- common/autotest_common.sh@1485 -- # grep 0000:0b:00.0/nvme/nvme 00:04:11.878 08:54:48 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:04:11.878 08:54:48 -- common/autotest_common.sh@1486 -- # [[ -z 
/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 ]] 00:04:11.878 08:54:48 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:04:11.878 08:54:48 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:11.878 08:54:48 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:11.878 08:54:48 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:11.878 08:54:48 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:11.878 08:54:48 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:11.878 08:54:48 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:11.878 08:54:48 -- common/autotest_common.sh@1529 -- # oacs=' 0xf' 00:04:11.878 08:54:48 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:11.878 08:54:48 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:11.878 08:54:48 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:11.878 08:54:48 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:11.878 08:54:48 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:11.878 08:54:48 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:11.878 08:54:48 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:11.878 08:54:48 -- common/autotest_common.sh@1541 -- # continue 00:04:11.878 08:54:48 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:11.878 08:54:48 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:11.878 08:54:48 -- common/autotest_common.sh@10 -- # set +x 00:04:11.878 08:54:48 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:11.878 08:54:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:11.878 08:54:48 -- common/autotest_common.sh@10 -- # set +x 00:04:11.878 08:54:48 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:13.252 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:13.252 0000:00:04.6 (8086 0e26): 
ioatdma -> vfio-pci 00:04:13.252 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:13.252 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:13.252 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:13.252 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:13.252 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:13.252 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:13.252 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:13.252 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:13.252 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:13.252 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:13.252 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:13.252 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:13.252 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:13.252 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:14.189 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:04:14.189 08:54:51 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:14.189 08:54:51 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:14.189 08:54:51 -- common/autotest_common.sh@10 -- # set +x 00:04:14.189 08:54:51 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:14.189 08:54:51 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:14.189 08:54:51 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:14.189 08:54:51 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:14.189 08:54:51 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:14.189 08:54:51 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:14.189 08:54:51 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:14.189 08:54:51 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:14.189 08:54:51 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:14.189 08:54:51 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:14.189 08:54:51 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:04:14.189 08:54:51 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:14.189 08:54:51 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:14.189 08:54:51 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:14.189 08:54:51 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:0b:00.0 00:04:14.189 08:54:51 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:14.189 08:54:51 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:0b:00.0/device 00:04:14.189 08:54:51 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:04:14.189 08:54:51 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:14.189 08:54:51 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:04:14.189 08:54:51 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:04:14.189 08:54:51 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:0b:00.0 00:04:14.189 08:54:51 -- common/autotest_common.sh@1577 -- # [[ -z 0000:0b:00.0 ]] 00:04:14.189 08:54:51 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=3198160 00:04:14.189 08:54:51 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.189 08:54:51 -- common/autotest_common.sh@1583 -- # waitforlisten 3198160 00:04:14.189 08:54:51 -- common/autotest_common.sh@833 -- # '[' -z 3198160 ']' 00:04:14.189 08:54:51 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.189 08:54:51 -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:14.189 08:54:51 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:14.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
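The trace above filters the enumerated NVMe BDFs down to those whose PCI device id reads `0x0a54` (by catting `/sys/bus/pci/devices/<bdf>/device` and appending matches to `bdfs`). A minimal sketch of that filter, using a mock sysfs tree since the real `/sys/bus/pci/devices` layout is host-specific (the mock directory and its contents are illustrative assumptions, not part of the autotest scripts):

```shell
#!/usr/bin/env bash
# Mock a tiny sysfs-like tree: one NVMe device (0x0a54) and one I/OAT channel.
mock=$(mktemp -d)
mkdir -p "$mock/0000:0b:00.0" "$mock/0000:00:04.0"
echo 0x0a54 > "$mock/0000:0b:00.0/device"
echo 0x0e20 > "$mock/0000:00:04.0/device"

# Keep only BDFs whose device id matches the target, as in the trace.
bdfs=()
for path in "$mock"/*; do
    device=$(cat "$path/device")
    [[ $device == 0x0a54 ]] && bdfs+=("$(basename "$path")")
done
printf '%s\n' "${bdfs[@]}"   # prints 0000:0b:00.0
```

The real script drives the same comparison with the shell pattern match `[[ 0x0a54 == \0\x\0\a\5\4 ]]` visible in the xtrace output.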
00:04:14.189 08:54:51 -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:14.189 08:54:51 -- common/autotest_common.sh@10 -- # set +x 00:04:14.447 [2024-11-20 08:54:51.240012] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:04:14.447 [2024-11-20 08:54:51.240104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3198160 ] 00:04:14.447 [2024-11-20 08:54:51.308011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.447 [2024-11-20 08:54:51.368059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.705 08:54:51 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:14.705 08:54:51 -- common/autotest_common.sh@866 -- # return 0 00:04:14.705 08:54:51 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:04:14.705 08:54:51 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:04:14.705 08:54:51 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0 00:04:17.993 nvme0n1 00:04:17.993 08:54:54 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:17.993 [2024-11-20 08:54:54.999839] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:17.993 [2024-11-20 08:54:54.999882] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:17.993 request: 00:04:17.993 { 00:04:17.993 "nvme_ctrlr_name": "nvme0", 00:04:17.993 "password": "test", 00:04:17.993 "method": "bdev_nvme_opal_revert", 00:04:17.993 "req_id": 1 00:04:17.993 } 00:04:17.993 Got JSON-RPC error response 00:04:17.993 response: 00:04:17.993 { 00:04:17.993 
"code": -32603, 00:04:17.993 "message": "Internal error" 00:04:17.993 } 00:04:17.993 08:54:55 -- common/autotest_common.sh@1589 -- # true 00:04:17.993 08:54:55 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:04:17.993 08:54:55 -- common/autotest_common.sh@1593 -- # killprocess 3198160 00:04:17.993 08:54:55 -- common/autotest_common.sh@952 -- # '[' -z 3198160 ']' 00:04:17.993 08:54:55 -- common/autotest_common.sh@956 -- # kill -0 3198160 00:04:17.993 08:54:55 -- common/autotest_common.sh@957 -- # uname 00:04:17.993 08:54:55 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:18.251 08:54:55 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3198160 00:04:18.251 08:54:55 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:18.251 08:54:55 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:18.251 08:54:55 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3198160' 00:04:18.251 killing process with pid 3198160 00:04:18.251 08:54:55 -- common/autotest_common.sh@971 -- # kill 3198160 00:04:18.251 08:54:55 -- common/autotest_common.sh@976 -- # wait 3198160 00:04:20.151 08:54:56 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:20.151 08:54:56 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:20.151 08:54:56 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:20.151 08:54:56 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:20.151 08:54:56 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:20.151 08:54:56 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:20.151 08:54:56 -- common/autotest_common.sh@10 -- # set +x 00:04:20.151 08:54:56 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:20.151 08:54:56 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:20.151 08:54:56 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:20.151 08:54:56 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:20.151 08:54:56 -- 
common/autotest_common.sh@10 -- # set +x 00:04:20.151 ************************************ 00:04:20.151 START TEST env 00:04:20.151 ************************************ 00:04:20.151 08:54:56 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:20.151 * Looking for test storage... 00:04:20.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:20.151 08:54:56 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:20.151 08:54:56 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:20.151 08:54:56 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:20.151 08:54:56 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:20.151 08:54:56 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.151 08:54:56 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.151 08:54:56 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.151 08:54:56 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.151 08:54:56 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.151 08:54:56 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.151 08:54:56 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.151 08:54:56 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.151 08:54:56 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.151 08:54:56 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.151 08:54:56 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.151 08:54:56 env -- scripts/common.sh@344 -- # case "$op" in 00:04:20.151 08:54:56 env -- scripts/common.sh@345 -- # : 1 00:04:20.151 08:54:56 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.151 08:54:56 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:20.151 08:54:56 env -- scripts/common.sh@365 -- # decimal 1 00:04:20.151 08:54:56 env -- scripts/common.sh@353 -- # local d=1 00:04:20.151 08:54:56 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.151 08:54:56 env -- scripts/common.sh@355 -- # echo 1 00:04:20.151 08:54:56 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.151 08:54:56 env -- scripts/common.sh@366 -- # decimal 2 00:04:20.151 08:54:56 env -- scripts/common.sh@353 -- # local d=2 00:04:20.151 08:54:56 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.151 08:54:56 env -- scripts/common.sh@355 -- # echo 2 00:04:20.151 08:54:56 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.151 08:54:56 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.151 08:54:56 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.151 08:54:56 env -- scripts/common.sh@368 -- # return 0 00:04:20.151 08:54:56 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.151 08:54:56 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:20.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.151 --rc genhtml_branch_coverage=1 00:04:20.151 --rc genhtml_function_coverage=1 00:04:20.151 --rc genhtml_legend=1 00:04:20.151 --rc geninfo_all_blocks=1 00:04:20.151 --rc geninfo_unexecuted_blocks=1 00:04:20.151 00:04:20.151 ' 00:04:20.151 08:54:56 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:20.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.151 --rc genhtml_branch_coverage=1 00:04:20.151 --rc genhtml_function_coverage=1 00:04:20.151 --rc genhtml_legend=1 00:04:20.151 --rc geninfo_all_blocks=1 00:04:20.151 --rc geninfo_unexecuted_blocks=1 00:04:20.151 00:04:20.151 ' 00:04:20.151 08:54:56 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:20.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:20.151 --rc genhtml_branch_coverage=1 00:04:20.151 --rc genhtml_function_coverage=1 00:04:20.151 --rc genhtml_legend=1 00:04:20.151 --rc geninfo_all_blocks=1 00:04:20.151 --rc geninfo_unexecuted_blocks=1 00:04:20.151 00:04:20.151 ' 00:04:20.151 08:54:56 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:20.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.151 --rc genhtml_branch_coverage=1 00:04:20.151 --rc genhtml_function_coverage=1 00:04:20.151 --rc genhtml_legend=1 00:04:20.151 --rc geninfo_all_blocks=1 00:04:20.151 --rc geninfo_unexecuted_blocks=1 00:04:20.151 00:04:20.151 ' 00:04:20.151 08:54:56 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:20.151 08:54:56 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:20.151 08:54:56 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:20.151 08:54:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.151 ************************************ 00:04:20.151 START TEST env_memory 00:04:20.151 ************************************ 00:04:20.151 08:54:56 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:20.151 00:04:20.151 00:04:20.151 CUnit - A unit testing framework for C - Version 2.1-3 00:04:20.151 http://cunit.sourceforge.net/ 00:04:20.151 00:04:20.151 00:04:20.151 Suite: memory 00:04:20.151 Test: alloc and free memory map ...[2024-11-20 08:54:56.996734] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:20.151 passed 00:04:20.151 Test: mem map translation ...[2024-11-20 08:54:57.017261] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:20.151 [2024-11-20 
08:54:57.017282] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:20.151 [2024-11-20 08:54:57.017342] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:20.151 [2024-11-20 08:54:57.017370] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:20.151 passed 00:04:20.151 Test: mem map registration ...[2024-11-20 08:54:57.061083] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:20.151 [2024-11-20 08:54:57.061103] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:20.151 passed 00:04:20.151 Test: mem map adjacent registrations ...passed 00:04:20.151 00:04:20.151 Run Summary: Type Total Ran Passed Failed Inactive 00:04:20.151 suites 1 1 n/a 0 0 00:04:20.151 tests 4 4 4 0 0 00:04:20.151 asserts 152 152 152 0 n/a 00:04:20.151 00:04:20.151 Elapsed time = 0.148 seconds 00:04:20.151 00:04:20.151 real 0m0.155s 00:04:20.151 user 0m0.148s 00:04:20.151 sys 0m0.007s 00:04:20.151 08:54:57 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:20.152 08:54:57 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:20.152 ************************************ 00:04:20.152 END TEST env_memory 00:04:20.152 ************************************ 00:04:20.152 08:54:57 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:20.152 08:54:57 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 
']' 00:04:20.152 08:54:57 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:20.152 08:54:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.152 ************************************ 00:04:20.152 START TEST env_vtophys 00:04:20.152 ************************************ 00:04:20.152 08:54:57 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:20.410 EAL: lib.eal log level changed from notice to debug 00:04:20.410 EAL: Detected lcore 0 as core 0 on socket 0 00:04:20.410 EAL: Detected lcore 1 as core 1 on socket 0 00:04:20.410 EAL: Detected lcore 2 as core 2 on socket 0 00:04:20.410 EAL: Detected lcore 3 as core 3 on socket 0 00:04:20.410 EAL: Detected lcore 4 as core 4 on socket 0 00:04:20.410 EAL: Detected lcore 5 as core 5 on socket 0 00:04:20.410 EAL: Detected lcore 6 as core 8 on socket 0 00:04:20.410 EAL: Detected lcore 7 as core 9 on socket 0 00:04:20.410 EAL: Detected lcore 8 as core 10 on socket 0 00:04:20.410 EAL: Detected lcore 9 as core 11 on socket 0 00:04:20.410 EAL: Detected lcore 10 as core 12 on socket 0 00:04:20.410 EAL: Detected lcore 11 as core 13 on socket 0 00:04:20.410 EAL: Detected lcore 12 as core 0 on socket 1 00:04:20.410 EAL: Detected lcore 13 as core 1 on socket 1 00:04:20.410 EAL: Detected lcore 14 as core 2 on socket 1 00:04:20.410 EAL: Detected lcore 15 as core 3 on socket 1 00:04:20.410 EAL: Detected lcore 16 as core 4 on socket 1 00:04:20.410 EAL: Detected lcore 17 as core 5 on socket 1 00:04:20.410 EAL: Detected lcore 18 as core 8 on socket 1 00:04:20.410 EAL: Detected lcore 19 as core 9 on socket 1 00:04:20.410 EAL: Detected lcore 20 as core 10 on socket 1 00:04:20.410 EAL: Detected lcore 21 as core 11 on socket 1 00:04:20.410 EAL: Detected lcore 22 as core 12 on socket 1 00:04:20.410 EAL: Detected lcore 23 as core 13 on socket 1 00:04:20.410 EAL: Detected lcore 24 as core 0 on socket 0 00:04:20.410 EAL: Detected lcore 25 as core 
1 on socket 0 00:04:20.410 EAL: Detected lcore 26 as core 2 on socket 0 00:04:20.410 EAL: Detected lcore 27 as core 3 on socket 0 00:04:20.410 EAL: Detected lcore 28 as core 4 on socket 0 00:04:20.410 EAL: Detected lcore 29 as core 5 on socket 0 00:04:20.410 EAL: Detected lcore 30 as core 8 on socket 0 00:04:20.410 EAL: Detected lcore 31 as core 9 on socket 0 00:04:20.410 EAL: Detected lcore 32 as core 10 on socket 0 00:04:20.410 EAL: Detected lcore 33 as core 11 on socket 0 00:04:20.410 EAL: Detected lcore 34 as core 12 on socket 0 00:04:20.410 EAL: Detected lcore 35 as core 13 on socket 0 00:04:20.410 EAL: Detected lcore 36 as core 0 on socket 1 00:04:20.410 EAL: Detected lcore 37 as core 1 on socket 1 00:04:20.410 EAL: Detected lcore 38 as core 2 on socket 1 00:04:20.410 EAL: Detected lcore 39 as core 3 on socket 1 00:04:20.410 EAL: Detected lcore 40 as core 4 on socket 1 00:04:20.410 EAL: Detected lcore 41 as core 5 on socket 1 00:04:20.410 EAL: Detected lcore 42 as core 8 on socket 1 00:04:20.410 EAL: Detected lcore 43 as core 9 on socket 1 00:04:20.410 EAL: Detected lcore 44 as core 10 on socket 1 00:04:20.410 EAL: Detected lcore 45 as core 11 on socket 1 00:04:20.410 EAL: Detected lcore 46 as core 12 on socket 1 00:04:20.410 EAL: Detected lcore 47 as core 13 on socket 1 00:04:20.410 EAL: Maximum logical cores by configuration: 128 00:04:20.410 EAL: Detected CPU lcores: 48 00:04:20.410 EAL: Detected NUMA nodes: 2 00:04:20.410 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:20.411 EAL: Detected shared linkage of DPDK 00:04:20.411 EAL: No shared files mode enabled, IPC will be disabled 00:04:20.411 EAL: Bus pci wants IOVA as 'DC' 00:04:20.411 EAL: Buses did not request a specific IOVA mode. 00:04:20.411 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:20.411 EAL: Selected IOVA mode 'VA' 00:04:20.411 EAL: Probing VFIO support... 
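The EAL lcore listing above follows a regular block layout on this host: lcores 0-11 on socket 0, 12-23 on socket 1, then the hyperthread siblings 24-35 and 36-47 repeating the pattern. A hypothetical one-line helper reproducing that mapping (an illustration of the pattern in this particular trace, not a general topology query; real code should read `/sys/devices/system/cpu/cpu*/topology/`):

```shell
#!/usr/bin/env bash
# Socket of a given lcore, assuming the 12-lcores-per-block layout above.
socket_of() { echo $(( ($1 / 12) % 2 )); }

socket_of 0    # prints 0
socket_of 12   # prints 1
socket_of 24   # prints 0 (hyperthread sibling block of socket 0)
socket_of 47   # prints 1
```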
00:04:20.411 EAL: IOMMU type 1 (Type 1) is supported 00:04:20.411 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:20.411 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:20.411 EAL: VFIO support initialized 00:04:20.411 EAL: Ask a virtual area of 0x2e000 bytes 00:04:20.411 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:20.411 EAL: Setting up physically contiguous memory... 00:04:20.411 EAL: Setting maximum number of open files to 524288 00:04:20.411 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:20.411 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:20.411 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:20.411 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.411 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:20.411 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.411 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.411 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:20.411 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:20.411 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.411 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:20.411 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.411 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.411 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:20.411 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:20.411 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.411 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:20.411 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.411 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.411 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:20.411 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:20.411 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.411 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:20.411 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.411 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.411 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:20.411 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:20.411 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:20.411 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.411 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:20.411 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:20.411 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.411 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:20.411 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:20.411 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.411 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:20.411 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:20.411 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.411 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:20.411 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:20.411 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.411 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:20.411 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:20.411 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.411 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:20.411 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:20.411 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.411 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:20.411 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:20.411 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.411 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:04:20.411 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:20.411 EAL: Hugepages will be freed exactly as allocated. 00:04:20.411 EAL: No shared files mode enabled, IPC is disabled 00:04:20.411 EAL: No shared files mode enabled, IPC is disabled 00:04:20.411 EAL: TSC frequency is ~2700000 KHz 00:04:20.411 EAL: Main lcore 0 is ready (tid=7fb903abea00;cpuset=[0]) 00:04:20.411 EAL: Trying to obtain current memory policy. 00:04:20.411 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.411 EAL: Restoring previous memory policy: 0 00:04:20.411 EAL: request: mp_malloc_sync 00:04:20.411 EAL: No shared files mode enabled, IPC is disabled 00:04:20.411 EAL: Heap on socket 0 was expanded by 2MB 00:04:20.411 EAL: No shared files mode enabled, IPC is disabled 00:04:20.411 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:20.411 EAL: Mem event callback 'spdk:(nil)' registered 00:04:20.411 00:04:20.411 00:04:20.411 CUnit - A unit testing framework for C - Version 2.1-3 00:04:20.411 http://cunit.sourceforge.net/ 00:04:20.411 00:04:20.411 00:04:20.411 Suite: components_suite 00:04:20.411 Test: vtophys_malloc_test ...passed 00:04:20.411 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:20.411 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.411 EAL: Restoring previous memory policy: 4 00:04:20.411 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.411 EAL: request: mp_malloc_sync 00:04:20.411 EAL: No shared files mode enabled, IPC is disabled 00:04:20.411 EAL: Heap on socket 0 was expanded by 4MB 00:04:20.411 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.411 EAL: request: mp_malloc_sync 00:04:20.411 EAL: No shared files mode enabled, IPC is disabled 00:04:20.411 EAL: Heap on socket 0 was shrunk by 4MB 00:04:20.411 EAL: Trying to obtain current memory policy. 
00:04:20.411 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.411 EAL: Restoring previous memory policy: 4 00:04:20.411 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.411 EAL: request: mp_malloc_sync 00:04:20.411 EAL: No shared files mode enabled, IPC is disabled 00:04:20.411 EAL: Heap on socket 0 was expanded by 6MB 00:04:20.411 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.411 EAL: request: mp_malloc_sync 00:04:20.411 EAL: No shared files mode enabled, IPC is disabled 00:04:20.411 EAL: Heap on socket 0 was shrunk by 6MB 00:04:20.411 EAL: Trying to obtain current memory policy. 00:04:20.411 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.411 EAL: Restoring previous memory policy: 4 00:04:20.411 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.411 EAL: request: mp_malloc_sync 00:04:20.411 EAL: No shared files mode enabled, IPC is disabled 00:04:20.411 EAL: Heap on socket 0 was expanded by 10MB 00:04:20.411 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.411 EAL: request: mp_malloc_sync 00:04:20.411 EAL: No shared files mode enabled, IPC is disabled 00:04:20.411 EAL: Heap on socket 0 was shrunk by 10MB 00:04:20.411 EAL: Trying to obtain current memory policy. 00:04:20.411 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.411 EAL: Restoring previous memory policy: 4 00:04:20.411 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.411 EAL: request: mp_malloc_sync 00:04:20.411 EAL: No shared files mode enabled, IPC is disabled 00:04:20.411 EAL: Heap on socket 0 was expanded by 18MB 00:04:20.411 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.411 EAL: request: mp_malloc_sync 00:04:20.411 EAL: No shared files mode enabled, IPC is disabled 00:04:20.411 EAL: Heap on socket 0 was shrunk by 18MB 00:04:20.411 EAL: Trying to obtain current memory policy. 
00:04:20.412 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.412 EAL: Restoring previous memory policy: 4 00:04:20.412 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.412 EAL: request: mp_malloc_sync 00:04:20.412 EAL: No shared files mode enabled, IPC is disabled 00:04:20.412 EAL: Heap on socket 0 was expanded by 34MB 00:04:20.412 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.412 EAL: request: mp_malloc_sync 00:04:20.412 EAL: No shared files mode enabled, IPC is disabled 00:04:20.412 EAL: Heap on socket 0 was shrunk by 34MB 00:04:20.412 EAL: Trying to obtain current memory policy. 00:04:20.412 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.412 EAL: Restoring previous memory policy: 4 00:04:20.412 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.412 EAL: request: mp_malloc_sync 00:04:20.412 EAL: No shared files mode enabled, IPC is disabled 00:04:20.412 EAL: Heap on socket 0 was expanded by 66MB 00:04:20.412 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.412 EAL: request: mp_malloc_sync 00:04:20.412 EAL: No shared files mode enabled, IPC is disabled 00:04:20.412 EAL: Heap on socket 0 was shrunk by 66MB 00:04:20.412 EAL: Trying to obtain current memory policy. 00:04:20.412 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.412 EAL: Restoring previous memory policy: 4 00:04:20.412 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.412 EAL: request: mp_malloc_sync 00:04:20.412 EAL: No shared files mode enabled, IPC is disabled 00:04:20.412 EAL: Heap on socket 0 was expanded by 130MB 00:04:20.412 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.412 EAL: request: mp_malloc_sync 00:04:20.412 EAL: No shared files mode enabled, IPC is disabled 00:04:20.412 EAL: Heap on socket 0 was shrunk by 130MB 00:04:20.412 EAL: Trying to obtain current memory policy. 
00:04:20.412 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.671 EAL: Restoring previous memory policy: 4 00:04:20.671 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.671 EAL: request: mp_malloc_sync 00:04:20.671 EAL: No shared files mode enabled, IPC is disabled 00:04:20.671 EAL: Heap on socket 0 was expanded by 258MB 00:04:20.671 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.671 EAL: request: mp_malloc_sync 00:04:20.671 EAL: No shared files mode enabled, IPC is disabled 00:04:20.671 EAL: Heap on socket 0 was shrunk by 258MB 00:04:20.671 EAL: Trying to obtain current memory policy. 00:04:20.671 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.929 EAL: Restoring previous memory policy: 4 00:04:20.929 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.929 EAL: request: mp_malloc_sync 00:04:20.929 EAL: No shared files mode enabled, IPC is disabled 00:04:20.929 EAL: Heap on socket 0 was expanded by 514MB 00:04:20.929 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.929 EAL: request: mp_malloc_sync 00:04:20.929 EAL: No shared files mode enabled, IPC is disabled 00:04:20.929 EAL: Heap on socket 0 was shrunk by 514MB 00:04:20.929 EAL: Trying to obtain current memory policy. 
00:04:20.929 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.186 EAL: Restoring previous memory policy: 4 00:04:21.186 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.186 EAL: request: mp_malloc_sync 00:04:21.186 EAL: No shared files mode enabled, IPC is disabled 00:04:21.186 EAL: Heap on socket 0 was expanded by 1026MB 00:04:21.443 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.703 EAL: request: mp_malloc_sync 00:04:21.703 EAL: No shared files mode enabled, IPC is disabled 00:04:21.703 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:21.703 passed 00:04:21.703 00:04:21.703 Run Summary: Type Total Ran Passed Failed Inactive 00:04:21.703 suites 1 1 n/a 0 0 00:04:21.703 tests 2 2 2 0 0 00:04:21.703 asserts 497 497 497 0 n/a 00:04:21.703 00:04:21.703 Elapsed time = 1.352 seconds 00:04:21.703 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.703 EAL: request: mp_malloc_sync 00:04:21.703 EAL: No shared files mode enabled, IPC is disabled 00:04:21.703 EAL: Heap on socket 0 was shrunk by 2MB 00:04:21.703 EAL: No shared files mode enabled, IPC is disabled 00:04:21.703 EAL: No shared files mode enabled, IPC is disabled 00:04:21.703 EAL: No shared files mode enabled, IPC is disabled 00:04:21.703 00:04:21.703 real 0m1.468s 00:04:21.703 user 0m0.868s 00:04:21.703 sys 0m0.570s 00:04:21.703 08:54:58 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:21.703 08:54:58 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:21.703 ************************************ 00:04:21.703 END TEST env_vtophys 00:04:21.703 ************************************ 00:04:21.703 08:54:58 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:21.703 08:54:58 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:21.703 08:54:58 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:21.703 08:54:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.703 
************************************ 00:04:21.703 START TEST env_pci 00:04:21.703 ************************************ 00:04:21.703 08:54:58 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:21.703 00:04:21.703 00:04:21.703 CUnit - A unit testing framework for C - Version 2.1-3 00:04:21.703 http://cunit.sourceforge.net/ 00:04:21.703 00:04:21.703 00:04:21.703 Suite: pci 00:04:21.704 Test: pci_hook ...[2024-11-20 08:54:58.696980] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3199060 has claimed it 00:04:21.704 EAL: Cannot find device (10000:00:01.0) 00:04:21.704 EAL: Failed to attach device on primary process 00:04:21.704 passed 00:04:21.704 00:04:21.704 Run Summary: Type Total Ran Passed Failed Inactive 00:04:21.704 suites 1 1 n/a 0 0 00:04:21.704 tests 1 1 1 0 0 00:04:21.704 asserts 25 25 25 0 n/a 00:04:21.704 00:04:21.704 Elapsed time = 0.022 seconds 00:04:21.704 00:04:21.704 real 0m0.036s 00:04:21.704 user 0m0.011s 00:04:21.704 sys 0m0.025s 00:04:21.704 08:54:58 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:21.704 08:54:58 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:21.704 ************************************ 00:04:21.704 END TEST env_pci 00:04:21.704 ************************************ 00:04:21.964 08:54:58 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:21.964 08:54:58 env -- env/env.sh@15 -- # uname 00:04:21.964 08:54:58 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:21.964 08:54:58 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:21.964 08:54:58 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:21.964 08:54:58 env -- 
common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:04:21.964 08:54:58 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:21.964 08:54:58 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.964 ************************************ 00:04:21.964 START TEST env_dpdk_post_init 00:04:21.964 ************************************ 00:04:21.964 08:54:58 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:21.964 EAL: Detected CPU lcores: 48 00:04:21.964 EAL: Detected NUMA nodes: 2 00:04:21.964 EAL: Detected shared linkage of DPDK 00:04:21.964 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:21.964 EAL: Selected IOVA mode 'VA' 00:04:21.964 EAL: VFIO support initialized 00:04:21.964 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:21.964 EAL: Using IOMMU type 1 (Type 1) 00:04:21.964 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:21.964 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:21.964 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:21.964 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:21.964 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:21.964 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:21.964 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:21.964 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:23.001 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0b:00.0 (socket 0) 00:04:23.001 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:23.001 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:23.001 EAL: Probe PCI driver: 
spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:23.001 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:23.001 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:23.001 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:23.001 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:23.001 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:26.277 EAL: Releasing PCI mapped resource for 0000:0b:00.0 00:04:26.277 EAL: Calling pci_unmap_resource for 0000:0b:00.0 at 0x202001020000 00:04:26.277 Starting DPDK initialization... 00:04:26.277 Starting SPDK post initialization... 00:04:26.277 SPDK NVMe probe 00:04:26.277 Attaching to 0000:0b:00.0 00:04:26.277 Attached to 0000:0b:00.0 00:04:26.277 Cleaning up... 00:04:26.277 00:04:26.277 real 0m4.368s 00:04:26.277 user 0m3.011s 00:04:26.277 sys 0m0.418s 00:04:26.277 08:55:03 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:26.277 08:55:03 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:26.277 ************************************ 00:04:26.277 END TEST env_dpdk_post_init 00:04:26.277 ************************************ 00:04:26.277 08:55:03 env -- env/env.sh@26 -- # uname 00:04:26.277 08:55:03 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:26.277 08:55:03 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:26.277 08:55:03 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:26.277 08:55:03 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:26.277 08:55:03 env -- common/autotest_common.sh@10 -- # set +x 00:04:26.277 ************************************ 00:04:26.277 START TEST env_mem_callbacks 00:04:26.277 ************************************ 00:04:26.277 08:55:03 
env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:26.277 EAL: Detected CPU lcores: 48 00:04:26.277 EAL: Detected NUMA nodes: 2 00:04:26.277 EAL: Detected shared linkage of DPDK 00:04:26.277 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:26.277 EAL: Selected IOVA mode 'VA' 00:04:26.277 EAL: VFIO support initialized 00:04:26.277 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:26.277 00:04:26.277 00:04:26.277 CUnit - A unit testing framework for C - Version 2.1-3 00:04:26.277 http://cunit.sourceforge.net/ 00:04:26.277 00:04:26.277 00:04:26.277 Suite: memory 00:04:26.277 Test: test ... 00:04:26.277 register 0x200000200000 2097152 00:04:26.277 malloc 3145728 00:04:26.277 register 0x200000400000 4194304 00:04:26.277 buf 0x200000500000 len 3145728 PASSED 00:04:26.277 malloc 64 00:04:26.277 buf 0x2000004fff40 len 64 PASSED 00:04:26.277 malloc 4194304 00:04:26.277 register 0x200000800000 6291456 00:04:26.277 buf 0x200000a00000 len 4194304 PASSED 00:04:26.277 free 0x200000500000 3145728 00:04:26.277 free 0x2000004fff40 64 00:04:26.277 unregister 0x200000400000 4194304 PASSED 00:04:26.277 free 0x200000a00000 4194304 00:04:26.277 unregister 0x200000800000 6291456 PASSED 00:04:26.277 malloc 8388608 00:04:26.277 register 0x200000400000 10485760 00:04:26.277 buf 0x200000600000 len 8388608 PASSED 00:04:26.277 free 0x200000600000 8388608 00:04:26.277 unregister 0x200000400000 10485760 PASSED 00:04:26.277 passed 00:04:26.277 00:04:26.277 Run Summary: Type Total Ran Passed Failed Inactive 00:04:26.277 suites 1 1 n/a 0 0 00:04:26.277 tests 1 1 1 0 0 00:04:26.277 asserts 15 15 15 0 n/a 00:04:26.277 00:04:26.277 Elapsed time = 0.005 seconds 00:04:26.277 00:04:26.277 real 0m0.048s 00:04:26.277 user 0m0.014s 00:04:26.277 sys 0m0.034s 00:04:26.277 08:55:03 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:26.277 08:55:03 
env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:26.277 ************************************ 00:04:26.277 END TEST env_mem_callbacks 00:04:26.277 ************************************ 00:04:26.277 00:04:26.277 real 0m6.471s 00:04:26.277 user 0m4.255s 00:04:26.277 sys 0m1.270s 00:04:26.277 08:55:03 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:26.277 08:55:03 env -- common/autotest_common.sh@10 -- # set +x 00:04:26.277 ************************************ 00:04:26.277 END TEST env 00:04:26.277 ************************************ 00:04:26.277 08:55:03 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:26.277 08:55:03 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:26.277 08:55:03 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:26.277 08:55:03 -- common/autotest_common.sh@10 -- # set +x 00:04:26.535 ************************************ 00:04:26.535 START TEST rpc 00:04:26.535 ************************************ 00:04:26.535 08:55:03 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:26.535 * Looking for test storage... 
00:04:26.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:26.535 08:55:03 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:26.535 08:55:03 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:26.535 08:55:03 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:26.535 08:55:03 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:26.535 08:55:03 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.535 08:55:03 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.535 08:55:03 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.535 08:55:03 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.535 08:55:03 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.535 08:55:03 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.535 08:55:03 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.535 08:55:03 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.535 08:55:03 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.535 08:55:03 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.535 08:55:03 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.535 08:55:03 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:26.535 08:55:03 rpc -- scripts/common.sh@345 -- # : 1 00:04:26.535 08:55:03 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.535 08:55:03 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:26.535 08:55:03 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:26.535 08:55:03 rpc -- scripts/common.sh@353 -- # local d=1 00:04:26.535 08:55:03 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.535 08:55:03 rpc -- scripts/common.sh@355 -- # echo 1 00:04:26.535 08:55:03 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.535 08:55:03 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:26.535 08:55:03 rpc -- scripts/common.sh@353 -- # local d=2 00:04:26.535 08:55:03 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.535 08:55:03 rpc -- scripts/common.sh@355 -- # echo 2 00:04:26.535 08:55:03 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.535 08:55:03 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.535 08:55:03 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.535 08:55:03 rpc -- scripts/common.sh@368 -- # return 0 00:04:26.535 08:55:03 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.535 08:55:03 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:26.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.535 --rc genhtml_branch_coverage=1 00:04:26.535 --rc genhtml_function_coverage=1 00:04:26.535 --rc genhtml_legend=1 00:04:26.535 --rc geninfo_all_blocks=1 00:04:26.535 --rc geninfo_unexecuted_blocks=1 00:04:26.535 00:04:26.535 ' 00:04:26.535 08:55:03 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:26.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.535 --rc genhtml_branch_coverage=1 00:04:26.535 --rc genhtml_function_coverage=1 00:04:26.535 --rc genhtml_legend=1 00:04:26.535 --rc geninfo_all_blocks=1 00:04:26.535 --rc geninfo_unexecuted_blocks=1 00:04:26.535 00:04:26.535 ' 00:04:26.535 08:55:03 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:26.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:26.535 --rc genhtml_branch_coverage=1 00:04:26.535 --rc genhtml_function_coverage=1 00:04:26.535 --rc genhtml_legend=1 00:04:26.535 --rc geninfo_all_blocks=1 00:04:26.535 --rc geninfo_unexecuted_blocks=1 00:04:26.535 00:04:26.535 ' 00:04:26.535 08:55:03 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:26.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.535 --rc genhtml_branch_coverage=1 00:04:26.535 --rc genhtml_function_coverage=1 00:04:26.535 --rc genhtml_legend=1 00:04:26.535 --rc geninfo_all_blocks=1 00:04:26.535 --rc geninfo_unexecuted_blocks=1 00:04:26.535 00:04:26.535 ' 00:04:26.535 08:55:03 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3199834 00:04:26.535 08:55:03 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:26.536 08:55:03 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:26.536 08:55:03 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3199834 00:04:26.536 08:55:03 rpc -- common/autotest_common.sh@833 -- # '[' -z 3199834 ']' 00:04:26.536 08:55:03 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.536 08:55:03 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:26.536 08:55:03 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.536 08:55:03 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:26.536 08:55:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.536 [2024-11-20 08:55:03.506899] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:04:26.536 [2024-11-20 08:55:03.506993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3199834 ] 00:04:26.794 [2024-11-20 08:55:03.574699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.794 [2024-11-20 08:55:03.631293] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:26.794 [2024-11-20 08:55:03.631353] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3199834' to capture a snapshot of events at runtime. 00:04:26.794 [2024-11-20 08:55:03.631381] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:26.794 [2024-11-20 08:55:03.631392] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:26.794 [2024-11-20 08:55:03.631401] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3199834 for offline analysis/debug. 
00:04:26.794 [2024-11-20 08:55:03.631946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.053 08:55:03 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:27.053 08:55:03 rpc -- common/autotest_common.sh@866 -- # return 0 00:04:27.053 08:55:03 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:27.053 08:55:03 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:27.053 08:55:03 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:27.053 08:55:03 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:27.053 08:55:03 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:27.053 08:55:03 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:27.053 08:55:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.053 ************************************ 00:04:27.053 START TEST rpc_integrity 00:04:27.053 ************************************ 00:04:27.053 08:55:03 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:27.053 08:55:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:27.053 08:55:03 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.053 08:55:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.053 08:55:03 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.053 08:55:03 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:04:27.053 08:55:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:27.053 08:55:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:27.053 08:55:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:27.053 08:55:03 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.053 08:55:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.053 08:55:03 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.053 08:55:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:27.053 08:55:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:27.053 08:55:03 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.053 08:55:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.053 08:55:03 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.053 08:55:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:27.053 { 00:04:27.053 "name": "Malloc0", 00:04:27.053 "aliases": [ 00:04:27.053 "14388eb2-6ac5-4bed-a779-434fccf87aae" 00:04:27.053 ], 00:04:27.053 "product_name": "Malloc disk", 00:04:27.053 "block_size": 512, 00:04:27.053 "num_blocks": 16384, 00:04:27.053 "uuid": "14388eb2-6ac5-4bed-a779-434fccf87aae", 00:04:27.053 "assigned_rate_limits": { 00:04:27.053 "rw_ios_per_sec": 0, 00:04:27.053 "rw_mbytes_per_sec": 0, 00:04:27.053 "r_mbytes_per_sec": 0, 00:04:27.053 "w_mbytes_per_sec": 0 00:04:27.053 }, 00:04:27.053 "claimed": false, 00:04:27.053 "zoned": false, 00:04:27.053 "supported_io_types": { 00:04:27.053 "read": true, 00:04:27.053 "write": true, 00:04:27.053 "unmap": true, 00:04:27.053 "flush": true, 00:04:27.053 "reset": true, 00:04:27.053 "nvme_admin": false, 00:04:27.053 "nvme_io": false, 00:04:27.053 "nvme_io_md": false, 00:04:27.053 "write_zeroes": true, 00:04:27.053 "zcopy": true, 00:04:27.053 "get_zone_info": false, 00:04:27.053 
"zone_management": false, 00:04:27.053 "zone_append": false, 00:04:27.053 "compare": false, 00:04:27.053 "compare_and_write": false, 00:04:27.053 "abort": true, 00:04:27.053 "seek_hole": false, 00:04:27.053 "seek_data": false, 00:04:27.053 "copy": true, 00:04:27.053 "nvme_iov_md": false 00:04:27.053 }, 00:04:27.053 "memory_domains": [ 00:04:27.053 { 00:04:27.053 "dma_device_id": "system", 00:04:27.053 "dma_device_type": 1 00:04:27.053 }, 00:04:27.053 { 00:04:27.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.053 "dma_device_type": 2 00:04:27.053 } 00:04:27.053 ], 00:04:27.053 "driver_specific": {} 00:04:27.053 } 00:04:27.053 ]' 00:04:27.053 08:55:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:27.053 08:55:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:27.053 08:55:04 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:27.053 08:55:04 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.053 08:55:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.053 [2024-11-20 08:55:04.013916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:27.053 [2024-11-20 08:55:04.013965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:27.053 [2024-11-20 08:55:04.013988] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21ccd20 00:04:27.053 [2024-11-20 08:55:04.014001] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:27.053 [2024-11-20 08:55:04.015352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:27.053 [2024-11-20 08:55:04.015376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:27.053 Passthru0 00:04:27.053 08:55:04 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.053 08:55:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:04:27.053 08:55:04 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.053 08:55:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.053 08:55:04 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.053 08:55:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:27.053 { 00:04:27.053 "name": "Malloc0", 00:04:27.053 "aliases": [ 00:04:27.053 "14388eb2-6ac5-4bed-a779-434fccf87aae" 00:04:27.053 ], 00:04:27.053 "product_name": "Malloc disk", 00:04:27.053 "block_size": 512, 00:04:27.053 "num_blocks": 16384, 00:04:27.053 "uuid": "14388eb2-6ac5-4bed-a779-434fccf87aae", 00:04:27.053 "assigned_rate_limits": { 00:04:27.053 "rw_ios_per_sec": 0, 00:04:27.053 "rw_mbytes_per_sec": 0, 00:04:27.053 "r_mbytes_per_sec": 0, 00:04:27.053 "w_mbytes_per_sec": 0 00:04:27.053 }, 00:04:27.053 "claimed": true, 00:04:27.053 "claim_type": "exclusive_write", 00:04:27.053 "zoned": false, 00:04:27.053 "supported_io_types": { 00:04:27.053 "read": true, 00:04:27.053 "write": true, 00:04:27.053 "unmap": true, 00:04:27.053 "flush": true, 00:04:27.053 "reset": true, 00:04:27.053 "nvme_admin": false, 00:04:27.053 "nvme_io": false, 00:04:27.053 "nvme_io_md": false, 00:04:27.053 "write_zeroes": true, 00:04:27.053 "zcopy": true, 00:04:27.053 "get_zone_info": false, 00:04:27.053 "zone_management": false, 00:04:27.053 "zone_append": false, 00:04:27.053 "compare": false, 00:04:27.053 "compare_and_write": false, 00:04:27.053 "abort": true, 00:04:27.053 "seek_hole": false, 00:04:27.053 "seek_data": false, 00:04:27.053 "copy": true, 00:04:27.053 "nvme_iov_md": false 00:04:27.053 }, 00:04:27.053 "memory_domains": [ 00:04:27.053 { 00:04:27.053 "dma_device_id": "system", 00:04:27.053 "dma_device_type": 1 00:04:27.053 }, 00:04:27.053 { 00:04:27.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.053 "dma_device_type": 2 00:04:27.053 } 00:04:27.053 ], 00:04:27.053 "driver_specific": {} 00:04:27.053 }, 00:04:27.053 { 
00:04:27.053 "name": "Passthru0", 00:04:27.053 "aliases": [ 00:04:27.053 "ec374018-794a-5ef0-ab29-c91f918c7ff9" 00:04:27.053 ], 00:04:27.053 "product_name": "passthru", 00:04:27.053 "block_size": 512, 00:04:27.053 "num_blocks": 16384, 00:04:27.053 "uuid": "ec374018-794a-5ef0-ab29-c91f918c7ff9", 00:04:27.053 "assigned_rate_limits": { 00:04:27.053 "rw_ios_per_sec": 0, 00:04:27.053 "rw_mbytes_per_sec": 0, 00:04:27.053 "r_mbytes_per_sec": 0, 00:04:27.053 "w_mbytes_per_sec": 0 00:04:27.053 }, 00:04:27.053 "claimed": false, 00:04:27.053 "zoned": false, 00:04:27.053 "supported_io_types": { 00:04:27.053 "read": true, 00:04:27.053 "write": true, 00:04:27.053 "unmap": true, 00:04:27.053 "flush": true, 00:04:27.053 "reset": true, 00:04:27.053 "nvme_admin": false, 00:04:27.053 "nvme_io": false, 00:04:27.053 "nvme_io_md": false, 00:04:27.053 "write_zeroes": true, 00:04:27.053 "zcopy": true, 00:04:27.053 "get_zone_info": false, 00:04:27.053 "zone_management": false, 00:04:27.053 "zone_append": false, 00:04:27.053 "compare": false, 00:04:27.053 "compare_and_write": false, 00:04:27.053 "abort": true, 00:04:27.053 "seek_hole": false, 00:04:27.053 "seek_data": false, 00:04:27.053 "copy": true, 00:04:27.053 "nvme_iov_md": false 00:04:27.053 }, 00:04:27.053 "memory_domains": [ 00:04:27.053 { 00:04:27.053 "dma_device_id": "system", 00:04:27.054 "dma_device_type": 1 00:04:27.054 }, 00:04:27.054 { 00:04:27.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.054 "dma_device_type": 2 00:04:27.054 } 00:04:27.054 ], 00:04:27.054 "driver_specific": { 00:04:27.054 "passthru": { 00:04:27.054 "name": "Passthru0", 00:04:27.054 "base_bdev_name": "Malloc0" 00:04:27.054 } 00:04:27.054 } 00:04:27.054 } 00:04:27.054 ]' 00:04:27.054 08:55:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:27.054 08:55:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:27.054 08:55:04 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:27.054 08:55:04 
rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.054 08:55:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.054 08:55:04 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.054 08:55:04 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:27.054 08:55:04 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.054 08:55:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.320 08:55:04 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.320 08:55:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:27.320 08:55:04 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.320 08:55:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.320 08:55:04 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.320 08:55:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:27.320 08:55:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:27.320 08:55:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:27.320 00:04:27.320 real 0m0.211s 00:04:27.320 user 0m0.136s 00:04:27.320 sys 0m0.020s 00:04:27.320 08:55:04 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:27.320 08:55:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.320 ************************************ 00:04:27.320 END TEST rpc_integrity 00:04:27.320 ************************************ 00:04:27.320 08:55:04 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:27.320 08:55:04 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:27.320 08:55:04 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:27.320 08:55:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.320 ************************************ 00:04:27.320 START TEST rpc_plugins 
00:04:27.320 ************************************ 00:04:27.320 08:55:04 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:04:27.320 08:55:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:27.320 08:55:04 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.320 08:55:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.320 08:55:04 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.320 08:55:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:27.320 08:55:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:27.320 08:55:04 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.320 08:55:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.320 08:55:04 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.320 08:55:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:27.320 { 00:04:27.320 "name": "Malloc1", 00:04:27.321 "aliases": [ 00:04:27.321 "b5d6683a-bb2d-4f60-b39c-fab097b49c13" 00:04:27.321 ], 00:04:27.321 "product_name": "Malloc disk", 00:04:27.321 "block_size": 4096, 00:04:27.321 "num_blocks": 256, 00:04:27.321 "uuid": "b5d6683a-bb2d-4f60-b39c-fab097b49c13", 00:04:27.321 "assigned_rate_limits": { 00:04:27.321 "rw_ios_per_sec": 0, 00:04:27.321 "rw_mbytes_per_sec": 0, 00:04:27.321 "r_mbytes_per_sec": 0, 00:04:27.321 "w_mbytes_per_sec": 0 00:04:27.321 }, 00:04:27.321 "claimed": false, 00:04:27.321 "zoned": false, 00:04:27.321 "supported_io_types": { 00:04:27.321 "read": true, 00:04:27.321 "write": true, 00:04:27.321 "unmap": true, 00:04:27.321 "flush": true, 00:04:27.321 "reset": true, 00:04:27.321 "nvme_admin": false, 00:04:27.321 "nvme_io": false, 00:04:27.321 "nvme_io_md": false, 00:04:27.321 "write_zeroes": true, 00:04:27.321 "zcopy": true, 00:04:27.321 "get_zone_info": false, 00:04:27.321 "zone_management": false, 00:04:27.321 
"zone_append": false, 00:04:27.321 "compare": false, 00:04:27.321 "compare_and_write": false, 00:04:27.321 "abort": true, 00:04:27.321 "seek_hole": false, 00:04:27.321 "seek_data": false, 00:04:27.321 "copy": true, 00:04:27.321 "nvme_iov_md": false 00:04:27.321 }, 00:04:27.321 "memory_domains": [ 00:04:27.321 { 00:04:27.321 "dma_device_id": "system", 00:04:27.321 "dma_device_type": 1 00:04:27.321 }, 00:04:27.321 { 00:04:27.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.321 "dma_device_type": 2 00:04:27.321 } 00:04:27.321 ], 00:04:27.321 "driver_specific": {} 00:04:27.321 } 00:04:27.321 ]' 00:04:27.321 08:55:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:27.321 08:55:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:27.321 08:55:04 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:27.321 08:55:04 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.321 08:55:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.321 08:55:04 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.321 08:55:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:27.321 08:55:04 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.321 08:55:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.321 08:55:04 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.321 08:55:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:27.321 08:55:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:27.321 08:55:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:27.321 00:04:27.321 real 0m0.109s 00:04:27.321 user 0m0.074s 00:04:27.321 sys 0m0.005s 00:04:27.321 08:55:04 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:27.321 08:55:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:27.321 ************************************ 
00:04:27.321 END TEST rpc_plugins 00:04:27.321 ************************************ 00:04:27.321 08:55:04 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:27.321 08:55:04 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:27.321 08:55:04 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:27.321 08:55:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.321 ************************************ 00:04:27.321 START TEST rpc_trace_cmd_test 00:04:27.321 ************************************ 00:04:27.321 08:55:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:04:27.321 08:55:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:27.321 08:55:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:27.321 08:55:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.321 08:55:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:27.578 08:55:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.578 08:55:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:27.578 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3199834", 00:04:27.578 "tpoint_group_mask": "0x8", 00:04:27.578 "iscsi_conn": { 00:04:27.578 "mask": "0x2", 00:04:27.578 "tpoint_mask": "0x0" 00:04:27.578 }, 00:04:27.578 "scsi": { 00:04:27.578 "mask": "0x4", 00:04:27.578 "tpoint_mask": "0x0" 00:04:27.578 }, 00:04:27.578 "bdev": { 00:04:27.578 "mask": "0x8", 00:04:27.578 "tpoint_mask": "0xffffffffffffffff" 00:04:27.578 }, 00:04:27.578 "nvmf_rdma": { 00:04:27.578 "mask": "0x10", 00:04:27.578 "tpoint_mask": "0x0" 00:04:27.578 }, 00:04:27.578 "nvmf_tcp": { 00:04:27.578 "mask": "0x20", 00:04:27.578 "tpoint_mask": "0x0" 00:04:27.578 }, 00:04:27.578 "ftl": { 00:04:27.578 "mask": "0x40", 00:04:27.578 "tpoint_mask": "0x0" 00:04:27.578 }, 00:04:27.578 "blobfs": { 00:04:27.578 "mask": "0x80", 00:04:27.578 
"tpoint_mask": "0x0" 00:04:27.578 }, 00:04:27.578 "dsa": { 00:04:27.578 "mask": "0x200", 00:04:27.578 "tpoint_mask": "0x0" 00:04:27.578 }, 00:04:27.578 "thread": { 00:04:27.578 "mask": "0x400", 00:04:27.578 "tpoint_mask": "0x0" 00:04:27.578 }, 00:04:27.578 "nvme_pcie": { 00:04:27.578 "mask": "0x800", 00:04:27.578 "tpoint_mask": "0x0" 00:04:27.578 }, 00:04:27.578 "iaa": { 00:04:27.579 "mask": "0x1000", 00:04:27.579 "tpoint_mask": "0x0" 00:04:27.579 }, 00:04:27.579 "nvme_tcp": { 00:04:27.579 "mask": "0x2000", 00:04:27.579 "tpoint_mask": "0x0" 00:04:27.579 }, 00:04:27.579 "bdev_nvme": { 00:04:27.579 "mask": "0x4000", 00:04:27.579 "tpoint_mask": "0x0" 00:04:27.579 }, 00:04:27.579 "sock": { 00:04:27.579 "mask": "0x8000", 00:04:27.579 "tpoint_mask": "0x0" 00:04:27.579 }, 00:04:27.579 "blob": { 00:04:27.579 "mask": "0x10000", 00:04:27.579 "tpoint_mask": "0x0" 00:04:27.579 }, 00:04:27.579 "bdev_raid": { 00:04:27.579 "mask": "0x20000", 00:04:27.579 "tpoint_mask": "0x0" 00:04:27.579 }, 00:04:27.579 "scheduler": { 00:04:27.579 "mask": "0x40000", 00:04:27.579 "tpoint_mask": "0x0" 00:04:27.579 } 00:04:27.579 }' 00:04:27.579 08:55:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:27.579 08:55:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:27.579 08:55:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:27.579 08:55:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:27.579 08:55:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:27.579 08:55:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:27.579 08:55:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:27.579 08:55:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:27.579 08:55:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:27.579 08:55:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:04:27.579 00:04:27.579 real 0m0.200s 00:04:27.579 user 0m0.175s 00:04:27.579 sys 0m0.017s 00:04:27.579 08:55:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:27.579 08:55:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:27.579 ************************************ 00:04:27.579 END TEST rpc_trace_cmd_test 00:04:27.579 ************************************ 00:04:27.579 08:55:04 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:27.579 08:55:04 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:27.579 08:55:04 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:27.579 08:55:04 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:27.579 08:55:04 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:27.579 08:55:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.579 ************************************ 00:04:27.579 START TEST rpc_daemon_integrity 00:04:27.579 ************************************ 00:04:27.579 08:55:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:27.579 08:55:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:27.579 08:55:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.579 08:55:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.579 08:55:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.579 08:55:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:27.579 08:55:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:27.837 08:55:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:27.837 08:55:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:27.837 08:55:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.837 08:55:04 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:27.837 08:55:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.837 08:55:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:27.837 08:55:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:27.837 08:55:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.837 08:55:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.837 08:55:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.837 08:55:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:27.837 { 00:04:27.837 "name": "Malloc2", 00:04:27.837 "aliases": [ 00:04:27.837 "0100b809-97ee-43e7-bc82-199985e9b8e4" 00:04:27.837 ], 00:04:27.837 "product_name": "Malloc disk", 00:04:27.837 "block_size": 512, 00:04:27.837 "num_blocks": 16384, 00:04:27.837 "uuid": "0100b809-97ee-43e7-bc82-199985e9b8e4", 00:04:27.837 "assigned_rate_limits": { 00:04:27.837 "rw_ios_per_sec": 0, 00:04:27.837 "rw_mbytes_per_sec": 0, 00:04:27.837 "r_mbytes_per_sec": 0, 00:04:27.837 "w_mbytes_per_sec": 0 00:04:27.837 }, 00:04:27.837 "claimed": false, 00:04:27.837 "zoned": false, 00:04:27.837 "supported_io_types": { 00:04:27.837 "read": true, 00:04:27.837 "write": true, 00:04:27.837 "unmap": true, 00:04:27.837 "flush": true, 00:04:27.837 "reset": true, 00:04:27.837 "nvme_admin": false, 00:04:27.837 "nvme_io": false, 00:04:27.837 "nvme_io_md": false, 00:04:27.837 "write_zeroes": true, 00:04:27.837 "zcopy": true, 00:04:27.837 "get_zone_info": false, 00:04:27.837 "zone_management": false, 00:04:27.837 "zone_append": false, 00:04:27.837 "compare": false, 00:04:27.837 "compare_and_write": false, 00:04:27.837 "abort": true, 00:04:27.837 "seek_hole": false, 00:04:27.837 "seek_data": false, 00:04:27.837 "copy": true, 00:04:27.837 "nvme_iov_md": false 00:04:27.837 }, 00:04:27.837 "memory_domains": [ 00:04:27.837 { 
00:04:27.837 "dma_device_id": "system", 00:04:27.837 "dma_device_type": 1 00:04:27.837 }, 00:04:27.837 { 00:04:27.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.837 "dma_device_type": 2 00:04:27.837 } 00:04:27.837 ], 00:04:27.837 "driver_specific": {} 00:04:27.837 } 00:04:27.837 ]' 00:04:27.837 08:55:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:27.837 08:55:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:27.837 08:55:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:27.837 08:55:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.837 08:55:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.837 [2024-11-20 08:55:04.680471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:27.837 [2024-11-20 08:55:04.680523] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:27.837 [2024-11-20 08:55:04.680547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2089f10 00:04:27.837 [2024-11-20 08:55:04.680561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:27.837 [2024-11-20 08:55:04.681784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:27.837 [2024-11-20 08:55:04.681811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:27.837 Passthru0 00:04:27.837 08:55:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.837 08:55:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:27.837 08:55:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.837 08:55:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.837 08:55:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:04:27.837 08:55:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:27.837 { 00:04:27.837 "name": "Malloc2", 00:04:27.837 "aliases": [ 00:04:27.837 "0100b809-97ee-43e7-bc82-199985e9b8e4" 00:04:27.837 ], 00:04:27.837 "product_name": "Malloc disk", 00:04:27.837 "block_size": 512, 00:04:27.837 "num_blocks": 16384, 00:04:27.837 "uuid": "0100b809-97ee-43e7-bc82-199985e9b8e4", 00:04:27.837 "assigned_rate_limits": { 00:04:27.837 "rw_ios_per_sec": 0, 00:04:27.837 "rw_mbytes_per_sec": 0, 00:04:27.837 "r_mbytes_per_sec": 0, 00:04:27.837 "w_mbytes_per_sec": 0 00:04:27.837 }, 00:04:27.837 "claimed": true, 00:04:27.837 "claim_type": "exclusive_write", 00:04:27.838 "zoned": false, 00:04:27.838 "supported_io_types": { 00:04:27.838 "read": true, 00:04:27.838 "write": true, 00:04:27.838 "unmap": true, 00:04:27.838 "flush": true, 00:04:27.838 "reset": true, 00:04:27.838 "nvme_admin": false, 00:04:27.838 "nvme_io": false, 00:04:27.838 "nvme_io_md": false, 00:04:27.838 "write_zeroes": true, 00:04:27.838 "zcopy": true, 00:04:27.838 "get_zone_info": false, 00:04:27.838 "zone_management": false, 00:04:27.838 "zone_append": false, 00:04:27.838 "compare": false, 00:04:27.838 "compare_and_write": false, 00:04:27.838 "abort": true, 00:04:27.838 "seek_hole": false, 00:04:27.838 "seek_data": false, 00:04:27.838 "copy": true, 00:04:27.838 "nvme_iov_md": false 00:04:27.838 }, 00:04:27.838 "memory_domains": [ 00:04:27.838 { 00:04:27.838 "dma_device_id": "system", 00:04:27.838 "dma_device_type": 1 00:04:27.838 }, 00:04:27.838 { 00:04:27.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.838 "dma_device_type": 2 00:04:27.838 } 00:04:27.838 ], 00:04:27.838 "driver_specific": {} 00:04:27.838 }, 00:04:27.838 { 00:04:27.838 "name": "Passthru0", 00:04:27.838 "aliases": [ 00:04:27.838 "f0b5ce12-3741-53aa-acbd-c8675a9399b3" 00:04:27.838 ], 00:04:27.838 "product_name": "passthru", 00:04:27.838 "block_size": 512, 00:04:27.838 "num_blocks": 16384, 00:04:27.838 "uuid": 
"f0b5ce12-3741-53aa-acbd-c8675a9399b3", 00:04:27.838 "assigned_rate_limits": { 00:04:27.838 "rw_ios_per_sec": 0, 00:04:27.838 "rw_mbytes_per_sec": 0, 00:04:27.838 "r_mbytes_per_sec": 0, 00:04:27.838 "w_mbytes_per_sec": 0 00:04:27.838 }, 00:04:27.838 "claimed": false, 00:04:27.838 "zoned": false, 00:04:27.838 "supported_io_types": { 00:04:27.838 "read": true, 00:04:27.838 "write": true, 00:04:27.838 "unmap": true, 00:04:27.838 "flush": true, 00:04:27.838 "reset": true, 00:04:27.838 "nvme_admin": false, 00:04:27.838 "nvme_io": false, 00:04:27.838 "nvme_io_md": false, 00:04:27.838 "write_zeroes": true, 00:04:27.838 "zcopy": true, 00:04:27.838 "get_zone_info": false, 00:04:27.838 "zone_management": false, 00:04:27.838 "zone_append": false, 00:04:27.838 "compare": false, 00:04:27.838 "compare_and_write": false, 00:04:27.838 "abort": true, 00:04:27.838 "seek_hole": false, 00:04:27.838 "seek_data": false, 00:04:27.838 "copy": true, 00:04:27.838 "nvme_iov_md": false 00:04:27.838 }, 00:04:27.838 "memory_domains": [ 00:04:27.838 { 00:04:27.838 "dma_device_id": "system", 00:04:27.838 "dma_device_type": 1 00:04:27.838 }, 00:04:27.838 { 00:04:27.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.838 "dma_device_type": 2 00:04:27.838 } 00:04:27.838 ], 00:04:27.838 "driver_specific": { 00:04:27.838 "passthru": { 00:04:27.838 "name": "Passthru0", 00:04:27.838 "base_bdev_name": "Malloc2" 00:04:27.838 } 00:04:27.838 } 00:04:27.838 } 00:04:27.838 ]' 00:04:27.838 08:55:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:27.838 08:55:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:27.838 08:55:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:27.838 08:55:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.838 08:55:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.838 08:55:04 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.838 08:55:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:27.838 08:55:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.838 08:55:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.838 08:55:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.838 08:55:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:27.838 08:55:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.838 08:55:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.838 08:55:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.838 08:55:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:27.838 08:55:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:27.838 08:55:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:27.838 00:04:27.838 real 0m0.213s 00:04:27.838 user 0m0.133s 00:04:27.838 sys 0m0.024s 00:04:27.838 08:55:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:27.838 08:55:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.838 ************************************ 00:04:27.838 END TEST rpc_daemon_integrity 00:04:27.838 ************************************ 00:04:27.838 08:55:04 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:27.838 08:55:04 rpc -- rpc/rpc.sh@84 -- # killprocess 3199834 00:04:27.838 08:55:04 rpc -- common/autotest_common.sh@952 -- # '[' -z 3199834 ']' 00:04:27.838 08:55:04 rpc -- common/autotest_common.sh@956 -- # kill -0 3199834 00:04:27.838 08:55:04 rpc -- common/autotest_common.sh@957 -- # uname 00:04:27.838 08:55:04 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:27.838 08:55:04 rpc -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3199834 00:04:27.838 08:55:04 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:27.838 08:55:04 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:27.838 08:55:04 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3199834' 00:04:27.838 killing process with pid 3199834 00:04:27.838 08:55:04 rpc -- common/autotest_common.sh@971 -- # kill 3199834 00:04:27.838 08:55:04 rpc -- common/autotest_common.sh@976 -- # wait 3199834 00:04:28.403 00:04:28.403 real 0m1.972s 00:04:28.403 user 0m2.430s 00:04:28.403 sys 0m0.620s 00:04:28.403 08:55:05 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:28.403 08:55:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.403 ************************************ 00:04:28.403 END TEST rpc 00:04:28.403 ************************************ 00:04:28.403 08:55:05 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:28.403 08:55:05 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:28.403 08:55:05 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:28.403 08:55:05 -- common/autotest_common.sh@10 -- # set +x 00:04:28.403 ************************************ 00:04:28.403 START TEST skip_rpc 00:04:28.403 ************************************ 00:04:28.403 08:55:05 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:28.403 * Looking for test storage... 
00:04:28.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:28.404 08:55:05 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:28.404 08:55:05 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:28.404 08:55:05 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:28.663 08:55:05 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:28.663 08:55:05 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.663 08:55:05 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.663 08:55:05 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.663 08:55:05 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.663 08:55:05 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.663 08:55:05 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.663 08:55:05 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.663 08:55:05 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.663 08:55:05 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.663 08:55:05 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.663 08:55:05 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.663 08:55:05 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:28.663 08:55:05 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:28.663 08:55:05 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.663 08:55:05 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:28.663 08:55:05 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:28.663 08:55:05 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:28.663 08:55:05 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.663 08:55:05 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:28.663 08:55:05 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.663 08:55:05 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:28.663 08:55:05 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:28.663 08:55:05 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.663 08:55:05 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:28.663 08:55:05 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.663 08:55:05 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.663 08:55:05 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.663 08:55:05 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:28.663 08:55:05 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.663 08:55:05 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:28.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.663 --rc genhtml_branch_coverage=1 00:04:28.663 --rc genhtml_function_coverage=1 00:04:28.663 --rc genhtml_legend=1 00:04:28.663 --rc geninfo_all_blocks=1 00:04:28.663 --rc geninfo_unexecuted_blocks=1 00:04:28.663 00:04:28.663 ' 00:04:28.663 08:55:05 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:28.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.663 --rc genhtml_branch_coverage=1 00:04:28.663 --rc genhtml_function_coverage=1 00:04:28.663 --rc genhtml_legend=1 00:04:28.663 --rc geninfo_all_blocks=1 00:04:28.663 --rc geninfo_unexecuted_blocks=1 00:04:28.663 00:04:28.663 ' 00:04:28.663 08:55:05 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:04:28.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.663 --rc genhtml_branch_coverage=1 00:04:28.663 --rc genhtml_function_coverage=1 00:04:28.663 --rc genhtml_legend=1 00:04:28.663 --rc geninfo_all_blocks=1 00:04:28.663 --rc geninfo_unexecuted_blocks=1 00:04:28.663 00:04:28.663 ' 00:04:28.663 08:55:05 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:28.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.663 --rc genhtml_branch_coverage=1 00:04:28.663 --rc genhtml_function_coverage=1 00:04:28.663 --rc genhtml_legend=1 00:04:28.663 --rc geninfo_all_blocks=1 00:04:28.663 --rc geninfo_unexecuted_blocks=1 00:04:28.663 00:04:28.663 ' 00:04:28.663 08:55:05 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:28.663 08:55:05 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:28.663 08:55:05 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:28.663 08:55:05 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:28.663 08:55:05 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:28.663 08:55:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.663 ************************************ 00:04:28.663 START TEST skip_rpc 00:04:28.663 ************************************ 00:04:28.663 08:55:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:04:28.663 08:55:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3200173 00:04:28.663 08:55:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:28.663 08:55:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:28.663 08:55:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:04:28.663 [2024-11-20 08:55:05.569007] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:04:28.663 [2024-11-20 08:55:05.569087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3200173 ] 00:04:28.663 [2024-11-20 08:55:05.639217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.921 [2024-11-20 08:55:05.698989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.183 08:55:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:34.183 08:55:10 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:34.183 08:55:10 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:34.183 08:55:10 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:34.183 08:55:10 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:34.183 08:55:10 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:34.183 08:55:10 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:34.183 08:55:10 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:34.183 08:55:10 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.184 08:55:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.184 08:55:10 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:34.184 08:55:10 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:34.184 08:55:10 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:34.184 08:55:10 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:34.184 08:55:10 
skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:34.184 08:55:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:34.184 08:55:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3200173 00:04:34.184 08:55:10 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 3200173 ']' 00:04:34.184 08:55:10 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 3200173 00:04:34.184 08:55:10 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:04:34.184 08:55:10 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:34.184 08:55:10 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3200173 00:04:34.184 08:55:10 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:34.184 08:55:10 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:34.184 08:55:10 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3200173' 00:04:34.184 killing process with pid 3200173 00:04:34.184 08:55:10 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 3200173 00:04:34.184 08:55:10 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 3200173 00:04:34.184 00:04:34.184 real 0m5.471s 00:04:34.184 user 0m5.160s 00:04:34.184 sys 0m0.322s 00:04:34.184 08:55:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:34.184 08:55:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.184 ************************************ 00:04:34.184 END TEST skip_rpc 00:04:34.184 ************************************ 00:04:34.184 08:55:11 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:34.184 08:55:11 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:34.184 08:55:11 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:34.184 08:55:11 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.184 ************************************ 00:04:34.184 START TEST skip_rpc_with_json 00:04:34.184 ************************************ 00:04:34.184 08:55:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:04:34.184 08:55:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:34.184 08:55:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3200856 00:04:34.184 08:55:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:34.184 08:55:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:34.184 08:55:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3200856 00:04:34.184 08:55:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 3200856 ']' 00:04:34.184 08:55:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.184 08:55:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:34.184 08:55:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.184 08:55:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:34.184 08:55:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.184 [2024-11-20 08:55:11.090152] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:04:34.184 [2024-11-20 08:55:11.090232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3200856 ] 00:04:34.184 [2024-11-20 08:55:11.157055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.442 [2024-11-20 08:55:11.219540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.700 08:55:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:34.701 08:55:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:04:34.701 08:55:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:34.701 08:55:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.701 08:55:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.701 [2024-11-20 08:55:11.492758] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:34.701 request: 00:04:34.701 { 00:04:34.701 "trtype": "tcp", 00:04:34.701 "method": "nvmf_get_transports", 00:04:34.701 "req_id": 1 00:04:34.701 } 00:04:34.701 Got JSON-RPC error response 00:04:34.701 response: 00:04:34.701 { 00:04:34.701 "code": -19, 00:04:34.701 "message": "No such device" 00:04:34.701 } 00:04:34.701 08:55:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:34.701 08:55:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:34.701 08:55:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.701 08:55:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.701 [2024-11-20 08:55:11.500869] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:34.701 08:55:11 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.701 08:55:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:34.701 08:55:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.701 08:55:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.701 08:55:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.701 08:55:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:34.701 { 00:04:34.701 "subsystems": [ 00:04:34.701 { 00:04:34.701 "subsystem": "fsdev", 00:04:34.701 "config": [ 00:04:34.701 { 00:04:34.701 "method": "fsdev_set_opts", 00:04:34.701 "params": { 00:04:34.701 "fsdev_io_pool_size": 65535, 00:04:34.701 "fsdev_io_cache_size": 256 00:04:34.701 } 00:04:34.701 } 00:04:34.701 ] 00:04:34.701 }, 00:04:34.701 { 00:04:34.701 "subsystem": "vfio_user_target", 00:04:34.701 "config": null 00:04:34.701 }, 00:04:34.701 { 00:04:34.701 "subsystem": "keyring", 00:04:34.701 "config": [] 00:04:34.701 }, 00:04:34.701 { 00:04:34.701 "subsystem": "iobuf", 00:04:34.701 "config": [ 00:04:34.701 { 00:04:34.701 "method": "iobuf_set_options", 00:04:34.701 "params": { 00:04:34.701 "small_pool_count": 8192, 00:04:34.701 "large_pool_count": 1024, 00:04:34.701 "small_bufsize": 8192, 00:04:34.701 "large_bufsize": 135168, 00:04:34.701 "enable_numa": false 00:04:34.701 } 00:04:34.701 } 00:04:34.701 ] 00:04:34.701 }, 00:04:34.701 { 00:04:34.701 "subsystem": "sock", 00:04:34.701 "config": [ 00:04:34.701 { 00:04:34.701 "method": "sock_set_default_impl", 00:04:34.701 "params": { 00:04:34.701 "impl_name": "posix" 00:04:34.701 } 00:04:34.701 }, 00:04:34.701 { 00:04:34.701 "method": "sock_impl_set_options", 00:04:34.701 "params": { 00:04:34.701 "impl_name": "ssl", 00:04:34.701 "recv_buf_size": 4096, 00:04:34.701 "send_buf_size": 4096, 
00:04:34.701 "enable_recv_pipe": true, 00:04:34.701 "enable_quickack": false, 00:04:34.701 "enable_placement_id": 0, 00:04:34.701 "enable_zerocopy_send_server": true, 00:04:34.701 "enable_zerocopy_send_client": false, 00:04:34.701 "zerocopy_threshold": 0, 00:04:34.701 "tls_version": 0, 00:04:34.701 "enable_ktls": false 00:04:34.701 } 00:04:34.701 }, 00:04:34.701 { 00:04:34.701 "method": "sock_impl_set_options", 00:04:34.701 "params": { 00:04:34.701 "impl_name": "posix", 00:04:34.701 "recv_buf_size": 2097152, 00:04:34.701 "send_buf_size": 2097152, 00:04:34.701 "enable_recv_pipe": true, 00:04:34.701 "enable_quickack": false, 00:04:34.701 "enable_placement_id": 0, 00:04:34.701 "enable_zerocopy_send_server": true, 00:04:34.701 "enable_zerocopy_send_client": false, 00:04:34.701 "zerocopy_threshold": 0, 00:04:34.701 "tls_version": 0, 00:04:34.701 "enable_ktls": false 00:04:34.701 } 00:04:34.701 } 00:04:34.701 ] 00:04:34.701 }, 00:04:34.701 { 00:04:34.701 "subsystem": "vmd", 00:04:34.701 "config": [] 00:04:34.701 }, 00:04:34.701 { 00:04:34.701 "subsystem": "accel", 00:04:34.701 "config": [ 00:04:34.701 { 00:04:34.701 "method": "accel_set_options", 00:04:34.701 "params": { 00:04:34.701 "small_cache_size": 128, 00:04:34.701 "large_cache_size": 16, 00:04:34.701 "task_count": 2048, 00:04:34.701 "sequence_count": 2048, 00:04:34.701 "buf_count": 2048 00:04:34.701 } 00:04:34.701 } 00:04:34.701 ] 00:04:34.701 }, 00:04:34.701 { 00:04:34.701 "subsystem": "bdev", 00:04:34.701 "config": [ 00:04:34.701 { 00:04:34.701 "method": "bdev_set_options", 00:04:34.701 "params": { 00:04:34.701 "bdev_io_pool_size": 65535, 00:04:34.701 "bdev_io_cache_size": 256, 00:04:34.701 "bdev_auto_examine": true, 00:04:34.701 "iobuf_small_cache_size": 128, 00:04:34.701 "iobuf_large_cache_size": 16 00:04:34.701 } 00:04:34.701 }, 00:04:34.701 { 00:04:34.701 "method": "bdev_raid_set_options", 00:04:34.701 "params": { 00:04:34.701 "process_window_size_kb": 1024, 00:04:34.701 "process_max_bandwidth_mb_sec": 0 
00:04:34.701 } 00:04:34.701 }, 00:04:34.701 { 00:04:34.701 "method": "bdev_iscsi_set_options", 00:04:34.701 "params": { 00:04:34.701 "timeout_sec": 30 00:04:34.701 } 00:04:34.701 }, 00:04:34.701 { 00:04:34.701 "method": "bdev_nvme_set_options", 00:04:34.701 "params": { 00:04:34.701 "action_on_timeout": "none", 00:04:34.701 "timeout_us": 0, 00:04:34.701 "timeout_admin_us": 0, 00:04:34.701 "keep_alive_timeout_ms": 10000, 00:04:34.701 "arbitration_burst": 0, 00:04:34.701 "low_priority_weight": 0, 00:04:34.701 "medium_priority_weight": 0, 00:04:34.701 "high_priority_weight": 0, 00:04:34.701 "nvme_adminq_poll_period_us": 10000, 00:04:34.701 "nvme_ioq_poll_period_us": 0, 00:04:34.701 "io_queue_requests": 0, 00:04:34.701 "delay_cmd_submit": true, 00:04:34.701 "transport_retry_count": 4, 00:04:34.701 "bdev_retry_count": 3, 00:04:34.701 "transport_ack_timeout": 0, 00:04:34.701 "ctrlr_loss_timeout_sec": 0, 00:04:34.701 "reconnect_delay_sec": 0, 00:04:34.701 "fast_io_fail_timeout_sec": 0, 00:04:34.701 "disable_auto_failback": false, 00:04:34.701 "generate_uuids": false, 00:04:34.701 "transport_tos": 0, 00:04:34.701 "nvme_error_stat": false, 00:04:34.701 "rdma_srq_size": 0, 00:04:34.701 "io_path_stat": false, 00:04:34.701 "allow_accel_sequence": false, 00:04:34.701 "rdma_max_cq_size": 0, 00:04:34.701 "rdma_cm_event_timeout_ms": 0, 00:04:34.701 "dhchap_digests": [ 00:04:34.701 "sha256", 00:04:34.701 "sha384", 00:04:34.701 "sha512" 00:04:34.701 ], 00:04:34.701 "dhchap_dhgroups": [ 00:04:34.701 "null", 00:04:34.701 "ffdhe2048", 00:04:34.701 "ffdhe3072", 00:04:34.701 "ffdhe4096", 00:04:34.701 "ffdhe6144", 00:04:34.701 "ffdhe8192" 00:04:34.701 ] 00:04:34.701 } 00:04:34.701 }, 00:04:34.701 { 00:04:34.701 "method": "bdev_nvme_set_hotplug", 00:04:34.701 "params": { 00:04:34.701 "period_us": 100000, 00:04:34.701 "enable": false 00:04:34.701 } 00:04:34.701 }, 00:04:34.701 { 00:04:34.701 "method": "bdev_wait_for_examine" 00:04:34.701 } 00:04:34.701 ] 00:04:34.701 }, 00:04:34.701 { 
00:04:34.701 "subsystem": "scsi", 00:04:34.701 "config": null 00:04:34.701 }, 00:04:34.701 { 00:04:34.701 "subsystem": "scheduler", 00:04:34.701 "config": [ 00:04:34.701 { 00:04:34.701 "method": "framework_set_scheduler", 00:04:34.701 "params": { 00:04:34.701 "name": "static" 00:04:34.701 } 00:04:34.701 } 00:04:34.701 ] 00:04:34.701 }, 00:04:34.701 { 00:04:34.701 "subsystem": "vhost_scsi", 00:04:34.701 "config": [] 00:04:34.701 }, 00:04:34.701 { 00:04:34.701 "subsystem": "vhost_blk", 00:04:34.701 "config": [] 00:04:34.701 }, 00:04:34.701 { 00:04:34.701 "subsystem": "ublk", 00:04:34.701 "config": [] 00:04:34.701 }, 00:04:34.701 { 00:04:34.701 "subsystem": "nbd", 00:04:34.701 "config": [] 00:04:34.701 }, 00:04:34.701 { 00:04:34.701 "subsystem": "nvmf", 00:04:34.701 "config": [ 00:04:34.701 { 00:04:34.701 "method": "nvmf_set_config", 00:04:34.701 "params": { 00:04:34.701 "discovery_filter": "match_any", 00:04:34.701 "admin_cmd_passthru": { 00:04:34.701 "identify_ctrlr": false 00:04:34.701 }, 00:04:34.701 "dhchap_digests": [ 00:04:34.701 "sha256", 00:04:34.701 "sha384", 00:04:34.701 "sha512" 00:04:34.701 ], 00:04:34.701 "dhchap_dhgroups": [ 00:04:34.701 "null", 00:04:34.701 "ffdhe2048", 00:04:34.701 "ffdhe3072", 00:04:34.701 "ffdhe4096", 00:04:34.701 "ffdhe6144", 00:04:34.701 "ffdhe8192" 00:04:34.701 ] 00:04:34.701 } 00:04:34.701 }, 00:04:34.701 { 00:04:34.702 "method": "nvmf_set_max_subsystems", 00:04:34.702 "params": { 00:04:34.702 "max_subsystems": 1024 00:04:34.702 } 00:04:34.702 }, 00:04:34.702 { 00:04:34.702 "method": "nvmf_set_crdt", 00:04:34.702 "params": { 00:04:34.702 "crdt1": 0, 00:04:34.702 "crdt2": 0, 00:04:34.702 "crdt3": 0 00:04:34.702 } 00:04:34.702 }, 00:04:34.702 { 00:04:34.702 "method": "nvmf_create_transport", 00:04:34.702 "params": { 00:04:34.702 "trtype": "TCP", 00:04:34.702 "max_queue_depth": 128, 00:04:34.702 "max_io_qpairs_per_ctrlr": 127, 00:04:34.702 "in_capsule_data_size": 4096, 00:04:34.702 "max_io_size": 131072, 00:04:34.702 
"io_unit_size": 131072, 00:04:34.702 "max_aq_depth": 128, 00:04:34.702 "num_shared_buffers": 511, 00:04:34.702 "buf_cache_size": 4294967295, 00:04:34.702 "dif_insert_or_strip": false, 00:04:34.702 "zcopy": false, 00:04:34.702 "c2h_success": true, 00:04:34.702 "sock_priority": 0, 00:04:34.702 "abort_timeout_sec": 1, 00:04:34.702 "ack_timeout": 0, 00:04:34.702 "data_wr_pool_size": 0 00:04:34.702 } 00:04:34.702 } 00:04:34.702 ] 00:04:34.702 }, 00:04:34.702 { 00:04:34.702 "subsystem": "iscsi", 00:04:34.702 "config": [ 00:04:34.702 { 00:04:34.702 "method": "iscsi_set_options", 00:04:34.702 "params": { 00:04:34.702 "node_base": "iqn.2016-06.io.spdk", 00:04:34.702 "max_sessions": 128, 00:04:34.702 "max_connections_per_session": 2, 00:04:34.702 "max_queue_depth": 64, 00:04:34.702 "default_time2wait": 2, 00:04:34.702 "default_time2retain": 20, 00:04:34.702 "first_burst_length": 8192, 00:04:34.702 "immediate_data": true, 00:04:34.702 "allow_duplicated_isid": false, 00:04:34.702 "error_recovery_level": 0, 00:04:34.702 "nop_timeout": 60, 00:04:34.702 "nop_in_interval": 30, 00:04:34.702 "disable_chap": false, 00:04:34.702 "require_chap": false, 00:04:34.702 "mutual_chap": false, 00:04:34.702 "chap_group": 0, 00:04:34.702 "max_large_datain_per_connection": 64, 00:04:34.702 "max_r2t_per_connection": 4, 00:04:34.702 "pdu_pool_size": 36864, 00:04:34.702 "immediate_data_pool_size": 16384, 00:04:34.702 "data_out_pool_size": 2048 00:04:34.702 } 00:04:34.702 } 00:04:34.702 ] 00:04:34.702 } 00:04:34.702 ] 00:04:34.702 } 00:04:34.702 08:55:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:34.702 08:55:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3200856 00:04:34.702 08:55:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 3200856 ']' 00:04:34.702 08:55:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 3200856 00:04:34.702 08:55:11 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@957 -- # uname 00:04:34.702 08:55:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:34.702 08:55:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3200856 00:04:34.702 08:55:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:34.702 08:55:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:34.702 08:55:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3200856' 00:04:34.702 killing process with pid 3200856 00:04:34.702 08:55:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 3200856 00:04:34.702 08:55:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 3200856 00:04:35.269 08:55:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3200996 00:04:35.269 08:55:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:35.269 08:55:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:40.530 08:55:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3200996 00:04:40.530 08:55:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 3200996 ']' 00:04:40.530 08:55:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 3200996 00:04:40.530 08:55:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:40.530 08:55:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:40.530 08:55:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3200996 00:04:40.530 08:55:17 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:40.530 08:55:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:40.530 08:55:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3200996' 00:04:40.530 killing process with pid 3200996 00:04:40.530 08:55:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 3200996 00:04:40.530 08:55:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 3200996 00:04:40.788 08:55:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:40.788 08:55:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:40.788 00:04:40.788 real 0m6.565s 00:04:40.788 user 0m6.210s 00:04:40.788 sys 0m0.694s 00:04:40.788 08:55:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:40.788 08:55:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:40.788 ************************************ 00:04:40.788 END TEST skip_rpc_with_json 00:04:40.788 ************************************ 00:04:40.788 08:55:17 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:40.788 08:55:17 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:40.788 08:55:17 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:40.789 08:55:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.789 ************************************ 00:04:40.789 START TEST skip_rpc_with_delay 00:04:40.789 ************************************ 00:04:40.789 08:55:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:04:40.789 08:55:17 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:40.789 08:55:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:40.789 08:55:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:40.789 08:55:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.789 08:55:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:40.789 08:55:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.789 08:55:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:40.789 08:55:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.789 08:55:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:40.789 08:55:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.789 08:55:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:40.789 08:55:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:40.789 [2024-11-20 08:55:17.710849] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:40.789 08:55:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:40.789 08:55:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:40.789 08:55:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:40.789 08:55:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:40.789 00:04:40.789 real 0m0.075s 00:04:40.789 user 0m0.048s 00:04:40.789 sys 0m0.027s 00:04:40.789 08:55:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:40.789 08:55:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:40.789 ************************************ 00:04:40.789 END TEST skip_rpc_with_delay 00:04:40.789 ************************************ 00:04:40.789 08:55:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:40.789 08:55:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:40.789 08:55:17 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:40.789 08:55:17 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:40.789 08:55:17 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:40.789 08:55:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.789 ************************************ 00:04:40.789 START TEST exit_on_failed_rpc_init 00:04:40.789 ************************************ 00:04:40.789 08:55:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:04:40.789 08:55:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3201720 00:04:40.789 08:55:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:40.789 08:55:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3201720 
00:04:40.789 08:55:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 3201720 ']' 00:04:40.789 08:55:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.789 08:55:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:40.789 08:55:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.789 08:55:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:40.789 08:55:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:41.048 [2024-11-20 08:55:17.831513] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:04:41.048 [2024-11-20 08:55:17.831614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3201720 ] 00:04:41.048 [2024-11-20 08:55:17.896181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.048 [2024-11-20 08:55:17.952130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.306 08:55:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:41.306 08:55:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:04:41.306 08:55:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.306 08:55:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:41.306 
08:55:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:41.306 08:55:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:41.306 08:55:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.306 08:55:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:41.306 08:55:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.306 08:55:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:41.306 08:55:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.306 08:55:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:41.306 08:55:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.306 08:55:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:41.306 08:55:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:41.306 [2024-11-20 08:55:18.278175] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:04:41.306 [2024-11-20 08:55:18.278261] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3201845 ] 00:04:41.564 [2024-11-20 08:55:18.344023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.564 [2024-11-20 08:55:18.404152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.564 [2024-11-20 08:55:18.404269] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:41.564 [2024-11-20 08:55:18.404311] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:41.564 [2024-11-20 08:55:18.404325] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:41.564 08:55:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:41.564 08:55:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:41.564 08:55:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:41.564 08:55:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:41.564 08:55:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:41.564 08:55:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:41.564 08:55:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:41.564 08:55:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3201720 00:04:41.564 08:55:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 3201720 ']' 00:04:41.564 08:55:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 3201720 00:04:41.564 08:55:18 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:04:41.564 08:55:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:41.564 08:55:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3201720 00:04:41.564 08:55:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:41.564 08:55:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:41.564 08:55:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3201720' 00:04:41.564 killing process with pid 3201720 00:04:41.564 08:55:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 3201720 00:04:41.564 08:55:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 3201720 00:04:42.130 00:04:42.130 real 0m1.164s 00:04:42.130 user 0m1.273s 00:04:42.130 sys 0m0.435s 00:04:42.130 08:55:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:42.130 08:55:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:42.130 ************************************ 00:04:42.130 END TEST exit_on_failed_rpc_init 00:04:42.130 ************************************ 00:04:42.130 08:55:18 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:42.130 00:04:42.130 real 0m13.629s 00:04:42.130 user 0m12.871s 00:04:42.130 sys 0m1.672s 00:04:42.130 08:55:18 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:42.130 08:55:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.130 ************************************ 00:04:42.130 END TEST skip_rpc 00:04:42.130 ************************************ 00:04:42.130 08:55:18 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:42.130 08:55:18 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:42.130 08:55:18 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:42.130 08:55:18 -- common/autotest_common.sh@10 -- # set +x 00:04:42.130 ************************************ 00:04:42.130 START TEST rpc_client 00:04:42.130 ************************************ 00:04:42.130 08:55:19 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:42.130 * Looking for test storage... 00:04:42.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:42.130 08:55:19 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:42.130 08:55:19 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:42.130 08:55:19 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:42.130 08:55:19 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:42.130 08:55:19 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.130 08:55:19 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.130 08:55:19 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.130 08:55:19 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.130 08:55:19 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.130 08:55:19 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.130 08:55:19 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.130 08:55:19 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.130 08:55:19 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.130 08:55:19 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.130 08:55:19 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.130 08:55:19 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:04:42.130 08:55:19 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:42.130 08:55:19 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.130 08:55:19 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:42.130 08:55:19 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:42.130 08:55:19 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:42.130 08:55:19 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.130 08:55:19 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:42.392 08:55:19 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.392 08:55:19 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:42.392 08:55:19 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:42.392 08:55:19 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.392 08:55:19 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:42.392 08:55:19 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.392 08:55:19 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.392 08:55:19 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.392 08:55:19 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:42.392 08:55:19 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.392 08:55:19 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:42.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.392 --rc genhtml_branch_coverage=1 00:04:42.392 --rc genhtml_function_coverage=1 00:04:42.392 --rc genhtml_legend=1 00:04:42.392 --rc geninfo_all_blocks=1 00:04:42.392 --rc geninfo_unexecuted_blocks=1 00:04:42.392 00:04:42.392 ' 00:04:42.392 08:55:19 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:42.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.392 --rc genhtml_branch_coverage=1 
00:04:42.392 --rc genhtml_function_coverage=1 00:04:42.392 --rc genhtml_legend=1 00:04:42.392 --rc geninfo_all_blocks=1 00:04:42.392 --rc geninfo_unexecuted_blocks=1 00:04:42.392 00:04:42.392 ' 00:04:42.392 08:55:19 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:42.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.392 --rc genhtml_branch_coverage=1 00:04:42.392 --rc genhtml_function_coverage=1 00:04:42.392 --rc genhtml_legend=1 00:04:42.392 --rc geninfo_all_blocks=1 00:04:42.392 --rc geninfo_unexecuted_blocks=1 00:04:42.392 00:04:42.392 ' 00:04:42.392 08:55:19 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:42.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.392 --rc genhtml_branch_coverage=1 00:04:42.392 --rc genhtml_function_coverage=1 00:04:42.392 --rc genhtml_legend=1 00:04:42.392 --rc geninfo_all_blocks=1 00:04:42.392 --rc geninfo_unexecuted_blocks=1 00:04:42.392 00:04:42.392 ' 00:04:42.392 08:55:19 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:42.392 OK 00:04:42.392 08:55:19 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:42.392 00:04:42.392 real 0m0.161s 00:04:42.392 user 0m0.114s 00:04:42.392 sys 0m0.057s 00:04:42.392 08:55:19 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:42.392 08:55:19 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:42.392 ************************************ 00:04:42.392 END TEST rpc_client 00:04:42.392 ************************************ 00:04:42.392 08:55:19 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:42.392 08:55:19 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:42.392 08:55:19 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:42.392 08:55:19 -- common/autotest_common.sh@10 
-- # set +x 00:04:42.392 ************************************ 00:04:42.392 START TEST json_config 00:04:42.392 ************************************ 00:04:42.392 08:55:19 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:42.392 08:55:19 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:42.392 08:55:19 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:42.393 08:55:19 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:42.393 08:55:19 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:42.393 08:55:19 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.393 08:55:19 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.393 08:55:19 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.393 08:55:19 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.393 08:55:19 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.393 08:55:19 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.393 08:55:19 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.393 08:55:19 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.393 08:55:19 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.393 08:55:19 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.393 08:55:19 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.393 08:55:19 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:42.393 08:55:19 json_config -- scripts/common.sh@345 -- # : 1 00:04:42.393 08:55:19 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.393 08:55:19 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.393 08:55:19 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:42.393 08:55:19 json_config -- scripts/common.sh@353 -- # local d=1 00:04:42.393 08:55:19 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.393 08:55:19 json_config -- scripts/common.sh@355 -- # echo 1 00:04:42.393 08:55:19 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.393 08:55:19 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:42.393 08:55:19 json_config -- scripts/common.sh@353 -- # local d=2 00:04:42.393 08:55:19 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.393 08:55:19 json_config -- scripts/common.sh@355 -- # echo 2 00:04:42.393 08:55:19 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.393 08:55:19 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.393 08:55:19 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.393 08:55:19 json_config -- scripts/common.sh@368 -- # return 0 00:04:42.393 08:55:19 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.393 08:55:19 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:42.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.393 --rc genhtml_branch_coverage=1 00:04:42.393 --rc genhtml_function_coverage=1 00:04:42.393 --rc genhtml_legend=1 00:04:42.393 --rc geninfo_all_blocks=1 00:04:42.393 --rc geninfo_unexecuted_blocks=1 00:04:42.393 00:04:42.393 ' 00:04:42.393 08:55:19 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:42.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.393 --rc genhtml_branch_coverage=1 00:04:42.393 --rc genhtml_function_coverage=1 00:04:42.393 --rc genhtml_legend=1 00:04:42.393 --rc geninfo_all_blocks=1 00:04:42.393 --rc geninfo_unexecuted_blocks=1 00:04:42.393 00:04:42.393 ' 00:04:42.393 08:55:19 json_config -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:42.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.393 --rc genhtml_branch_coverage=1 00:04:42.393 --rc genhtml_function_coverage=1 00:04:42.393 --rc genhtml_legend=1 00:04:42.393 --rc geninfo_all_blocks=1 00:04:42.393 --rc geninfo_unexecuted_blocks=1 00:04:42.393 00:04:42.393 ' 00:04:42.393 08:55:19 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:42.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.393 --rc genhtml_branch_coverage=1 00:04:42.393 --rc genhtml_function_coverage=1 00:04:42.393 --rc genhtml_legend=1 00:04:42.393 --rc geninfo_all_blocks=1 00:04:42.393 --rc geninfo_unexecuted_blocks=1 00:04:42.393 00:04:42.393 ' 00:04:42.393 08:55:19 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:42.393 08:55:19 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:42.393 08:55:19 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:42.393 08:55:19 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:42.393 08:55:19 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:42.393 08:55:19 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:42.393 08:55:19 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:42.393 08:55:19 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:42.393 08:55:19 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:42.393 08:55:19 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:42.393 08:55:19 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:42.393 08:55:19 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:42.393 08:55:19 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:04:42.393 08:55:19 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:04:42.393 08:55:19 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:42.393 08:55:19 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:42.393 08:55:19 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:42.393 08:55:19 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:42.393 08:55:19 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:42.393 08:55:19 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:42.393 08:55:19 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:42.393 08:55:19 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:42.393 08:55:19 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:42.393 08:55:19 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.393 08:55:19 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.393 08:55:19 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.393 08:55:19 json_config -- paths/export.sh@5 -- # export PATH 00:04:42.393 08:55:19 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.393 08:55:19 json_config -- nvmf/common.sh@51 -- # : 0 00:04:42.393 08:55:19 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:42.393 08:55:19 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:42.393 08:55:19 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:42.393 08:55:19 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:42.393 08:55:19 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:42.393 08:55:19 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:42.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:42.393 08:55:19 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:42.393 08:55:19 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:42.393 08:55:19 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:42.393 08:55:19 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:42.393 08:55:19 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:42.393 08:55:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:42.393 08:55:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:42.393 08:55:19 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:42.393 08:55:19 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:42.393 08:55:19 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:42.393 08:55:19 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:42.393 08:55:19 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:42.393 08:55:19 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:42.393 08:55:19 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:42.393 08:55:19 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:42.393 08:55:19 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:42.394 08:55:19 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:42.394 08:55:19 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:42.394 08:55:19 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:42.394 INFO: JSON configuration test init 00:04:42.394 08:55:19 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:42.394 08:55:19 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:42.394 08:55:19 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:42.394 08:55:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.394 08:55:19 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:42.394 08:55:19 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:42.394 08:55:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.394 08:55:19 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:42.394 08:55:19 json_config -- json_config/common.sh@9 -- # local app=target 00:04:42.394 08:55:19 json_config -- json_config/common.sh@10 -- # shift 00:04:42.394 08:55:19 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:42.394 08:55:19 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:42.394 08:55:19 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:42.394 08:55:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.394 08:55:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.394 08:55:19 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3202105 00:04:42.394 08:55:19 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:42.394 08:55:19 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:42.394 Waiting for target to run... 
00:04:42.394 08:55:19 json_config -- json_config/common.sh@25 -- # waitforlisten 3202105 /var/tmp/spdk_tgt.sock 00:04:42.394 08:55:19 json_config -- common/autotest_common.sh@833 -- # '[' -z 3202105 ']' 00:04:42.394 08:55:19 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:42.394 08:55:19 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:42.394 08:55:19 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:42.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:42.394 08:55:19 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:42.394 08:55:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.653 [2024-11-20 08:55:19.432099] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:04:42.653 [2024-11-20 08:55:19.432194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3202105 ] 00:04:43.221 [2024-11-20 08:55:20.015631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.221 [2024-11-20 08:55:20.071724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.479 08:55:20 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:43.479 08:55:20 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:43.479 08:55:20 json_config -- json_config/common.sh@26 -- # echo '' 00:04:43.479 00:04:43.479 08:55:20 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:43.479 08:55:20 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:43.479 08:55:20 json_config -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:04:43.479 08:55:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.479 08:55:20 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:43.479 08:55:20 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:43.479 08:55:20 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:43.479 08:55:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.479 08:55:20 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:43.479 08:55:20 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:43.479 08:55:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:46.758 08:55:23 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:46.759 08:55:23 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:46.759 08:55:23 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:46.759 08:55:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.759 08:55:23 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:46.759 08:55:23 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:46.759 08:55:23 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:46.759 08:55:23 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:46.759 08:55:23 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:46.759 08:55:23 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:46.759 08:55:23 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:46.759 08:55:23 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:47.016 08:55:23 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:47.016 08:55:23 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:47.016 08:55:23 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:47.016 08:55:23 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:47.016 08:55:23 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:47.016 08:55:23 json_config -- json_config/json_config.sh@54 -- # sort 00:04:47.016 08:55:23 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:47.016 08:55:23 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:47.016 08:55:23 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:47.016 08:55:23 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:47.016 08:55:23 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:47.016 08:55:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.016 08:55:23 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:47.016 08:55:23 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:47.016 08:55:23 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:47.016 08:55:23 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:47.016 08:55:23 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:47.016 08:55:23 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:47.016 08:55:23 json_config -- 
json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:47.016 08:55:23 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:47.016 08:55:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.016 08:55:23 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:47.016 08:55:23 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:47.016 08:55:23 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:47.016 08:55:23 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:47.016 08:55:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:47.274 MallocForNvmf0 00:04:47.274 08:55:24 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:47.274 08:55:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:47.532 MallocForNvmf1 00:04:47.532 08:55:24 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:47.532 08:55:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:47.789 [2024-11-20 08:55:24.723826] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:47.789 08:55:24 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:47.789 08:55:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:48.047 08:55:25 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:48.047 08:55:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:48.304 08:55:25 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:48.305 08:55:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:48.562 08:55:25 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:48.562 08:55:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:48.820 [2024-11-20 08:55:25.771112] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:48.820 08:55:25 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:48.820 08:55:25 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:48.820 08:55:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.820 08:55:25 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:48.820 08:55:25 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:48.820 08:55:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.820 08:55:25 json_config -- json_config/json_config.sh@302 -- # 
[[ 0 -eq 1 ]] 00:04:48.820 08:55:25 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:48.820 08:55:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:49.078 MallocBdevForConfigChangeCheck 00:04:49.078 08:55:26 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:49.078 08:55:26 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:49.078 08:55:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.336 08:55:26 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:49.336 08:55:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:49.594 08:55:26 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:49.594 INFO: shutting down applications... 
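The `tgt_check_notification_types` trace earlier builds `type_diff` by piping both type lists through `tr ' ' '\n' | sort | uniq -u`: `uniq -u` drops every line that occurs more than once, so types present in both lists vanish and only mismatches survive; an empty result means the expected and reported sets match. A standalone reproduction, with sample lists mirroring the values in the log:

```shell
# Symmetric-difference check, as in json_config.sh's notification-type test.
enabled_types="bdev_register bdev_unregister fsdev_register fsdev_unregister"
get_types="fsdev_register fsdev_unregister bdev_register bdev_unregister"
type_diff=$(echo $enabled_types $get_types | tr ' ' '\n' | sort | uniq -u)
if [ -z "$type_diff" ]; then
    echo "notification types match"     # prints: notification types match
else
    echo "unexpected difference: $type_diff"
fi
```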
00:04:49.594 08:55:26 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:49.594 08:55:26 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:49.594 08:55:26 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:49.594 08:55:26 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:51.489 Calling clear_iscsi_subsystem 00:04:51.489 Calling clear_nvmf_subsystem 00:04:51.489 Calling clear_nbd_subsystem 00:04:51.489 Calling clear_ublk_subsystem 00:04:51.489 Calling clear_vhost_blk_subsystem 00:04:51.489 Calling clear_vhost_scsi_subsystem 00:04:51.489 Calling clear_bdev_subsystem 00:04:51.489 08:55:28 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:51.489 08:55:28 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:51.489 08:55:28 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:51.489 08:55:28 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:51.489 08:55:28 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:51.490 08:55:28 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:51.747 08:55:28 json_config -- json_config/json_config.sh@352 -- # break 00:04:51.747 08:55:28 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:51.747 08:55:28 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:51.747 08:55:28 json_config -- 
json_config/common.sh@31 -- # local app=target 00:04:51.747 08:55:28 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:51.747 08:55:28 json_config -- json_config/common.sh@35 -- # [[ -n 3202105 ]] 00:04:51.747 08:55:28 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3202105 00:04:51.747 08:55:28 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:51.747 08:55:28 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.747 08:55:28 json_config -- json_config/common.sh@41 -- # kill -0 3202105 00:04:51.747 08:55:28 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:52.316 08:55:29 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:52.316 08:55:29 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:52.316 08:55:29 json_config -- json_config/common.sh@41 -- # kill -0 3202105 00:04:52.316 08:55:29 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:52.316 08:55:29 json_config -- json_config/common.sh@43 -- # break 00:04:52.316 08:55:29 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:52.316 08:55:29 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:52.316 SPDK target shutdown done 00:04:52.316 08:55:29 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:52.316 INFO: relaunching applications... 
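The shutdown sequence traced above sends `SIGINT` to the target, then polls it with `kill -0` for up to 30 half-second intervals before printing "SPDK target shutdown done". A minimal sketch of that pattern (the function name is an assumption):

```shell
# Graceful-shutdown poll, as in json_config_test_shutdown_app: signal 0
# delivers nothing, but its exit status reports whether the PID still exists.
shutdown_app() {
    local pid=$1
    kill -SIGINT "$pid" 2>/dev/null
    local i
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 0   # process gone: clean shutdown
        sleep 0.5
    done
    return 1                                     # still alive after ~15 s
}
```

The exit status of `kill -0` is what distinguishes "still running" from "exited"; the real script additionally clears `app_pid["$app"]` once the loop breaks.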
00:04:52.316 08:55:29 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:52.316 08:55:29 json_config -- json_config/common.sh@9 -- # local app=target 00:04:52.316 08:55:29 json_config -- json_config/common.sh@10 -- # shift 00:04:52.316 08:55:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:52.316 08:55:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:52.316 08:55:29 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:52.316 08:55:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:52.316 08:55:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:52.316 08:55:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3203308 00:04:52.316 08:55:29 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:52.316 08:55:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:52.316 Waiting for target to run... 00:04:52.316 08:55:29 json_config -- json_config/common.sh@25 -- # waitforlisten 3203308 /var/tmp/spdk_tgt.sock 00:04:52.316 08:55:29 json_config -- common/autotest_common.sh@833 -- # '[' -z 3203308 ']' 00:04:52.316 08:55:29 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:52.316 08:55:29 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:52.316 08:55:29 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:52.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:52.316 08:55:29 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:52.316 08:55:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.316 [2024-11-20 08:55:29.130691] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:04:52.316 [2024-11-20 08:55:29.130771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3203308 ] 00:04:52.884 [2024-11-20 08:55:29.670547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.884 [2024-11-20 08:55:29.719608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.168 [2024-11-20 08:55:32.771904] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:56.168 [2024-11-20 08:55:32.804347] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:56.168 08:55:32 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:56.168 08:55:32 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:56.168 08:55:32 json_config -- json_config/common.sh@26 -- # echo '' 00:04:56.168 00:04:56.168 08:55:32 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:56.168 08:55:32 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:56.168 INFO: Checking if target configuration is the same... 
00:04:56.168 08:55:32 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:56.168 08:55:32 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:56.168 08:55:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:56.168 + '[' 2 -ne 2 ']' 00:04:56.168 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:56.168 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:56.168 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:56.168 +++ basename /dev/fd/62 00:04:56.168 ++ mktemp /tmp/62.XXX 00:04:56.168 + tmp_file_1=/tmp/62.4Ch 00:04:56.168 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:56.168 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:56.168 + tmp_file_2=/tmp/spdk_tgt_config.json.JKX 00:04:56.168 + ret=0 00:04:56.168 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:56.425 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:56.425 + diff -u /tmp/62.4Ch /tmp/spdk_tgt_config.json.JKX 00:04:56.425 + echo 'INFO: JSON config files are the same' 00:04:56.425 INFO: JSON config files are the same 00:04:56.425 + rm /tmp/62.4Ch /tmp/spdk_tgt_config.json.JKX 00:04:56.425 + exit 0 00:04:56.425 08:55:33 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:56.425 08:55:33 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:56.425 INFO: changing configuration and checking if this can be detected... 
00:04:56.425 08:55:33 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:56.425 08:55:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:56.683 08:55:33 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:56.683 08:55:33 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:56.683 08:55:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:56.683 + '[' 2 -ne 2 ']' 00:04:56.683 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:56.683 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:56.683 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:56.683 +++ basename /dev/fd/62 00:04:56.683 ++ mktemp /tmp/62.XXX 00:04:56.683 + tmp_file_1=/tmp/62.uYd 00:04:56.683 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:56.683 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:56.683 + tmp_file_2=/tmp/spdk_tgt_config.json.qpM 00:04:56.683 + ret=0 00:04:56.683 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:57.249 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:57.249 + diff -u /tmp/62.uYd /tmp/spdk_tgt_config.json.qpM 00:04:57.249 + ret=1 00:04:57.249 + echo '=== Start of file: /tmp/62.uYd ===' 00:04:57.249 + cat /tmp/62.uYd 00:04:57.249 + echo '=== End of file: /tmp/62.uYd ===' 00:04:57.249 + echo '' 00:04:57.249 + echo '=== Start of file: /tmp/spdk_tgt_config.json.qpM ===' 00:04:57.249 + cat /tmp/spdk_tgt_config.json.qpM 00:04:57.249 + echo '=== End of file: /tmp/spdk_tgt_config.json.qpM ===' 00:04:57.249 + echo '' 00:04:57.249 + rm /tmp/62.uYd /tmp/spdk_tgt_config.json.qpM 00:04:57.249 + exit 1 00:04:57.249 08:55:34 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:57.249 INFO: configuration change detected. 
00:04:57.249 08:55:34 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:57.249 08:55:34 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:57.249 08:55:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:57.249 08:55:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.249 08:55:34 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:57.249 08:55:34 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:57.249 08:55:34 json_config -- json_config/json_config.sh@324 -- # [[ -n 3203308 ]] 00:04:57.250 08:55:34 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:57.250 08:55:34 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:57.250 08:55:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:57.250 08:55:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.250 08:55:34 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:57.250 08:55:34 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:57.250 08:55:34 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:57.250 08:55:34 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:57.250 08:55:34 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:57.250 08:55:34 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:57.250 08:55:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:57.250 08:55:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.250 08:55:34 json_config -- json_config/json_config.sh@330 -- # killprocess 3203308 00:04:57.250 08:55:34 json_config -- common/autotest_common.sh@952 -- # '[' -z 3203308 ']' 00:04:57.250 08:55:34 json_config -- common/autotest_common.sh@956 -- # kill -0 
3203308 00:04:57.250 08:55:34 json_config -- common/autotest_common.sh@957 -- # uname 00:04:57.250 08:55:34 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:57.250 08:55:34 json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3203308 00:04:57.250 08:55:34 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:57.250 08:55:34 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:57.250 08:55:34 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3203308' 00:04:57.250 killing process with pid 3203308 00:04:57.250 08:55:34 json_config -- common/autotest_common.sh@971 -- # kill 3203308 00:04:57.250 08:55:34 json_config -- common/autotest_common.sh@976 -- # wait 3203308 00:04:58.625 08:55:35 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:58.884 08:55:35 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:58.884 08:55:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:58.884 08:55:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.884 08:55:35 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:58.884 08:55:35 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:58.884 INFO: Success 00:04:58.884 00:04:58.884 real 0m16.452s 00:04:58.884 user 0m17.780s 00:04:58.884 sys 0m2.879s 00:04:58.884 08:55:35 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:58.884 08:55:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.884 ************************************ 00:04:58.884 END TEST json_config 00:04:58.884 ************************************ 00:04:58.884 08:55:35 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:58.884 08:55:35 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:58.884 08:55:35 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:58.884 08:55:35 -- common/autotest_common.sh@10 -- # set +x 00:04:58.884 ************************************ 00:04:58.884 START TEST json_config_extra_key 00:04:58.884 ************************************ 00:04:58.884 08:55:35 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:58.884 08:55:35 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:58.884 08:55:35 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:58.884 08:55:35 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:58.884 08:55:35 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:58.884 08:55:35 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.884 08:55:35 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.884 08:55:35 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.884 08:55:35 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.884 08:55:35 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.884 08:55:35 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.884 08:55:35 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.884 08:55:35 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.885 08:55:35 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.885 08:55:35 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.885 08:55:35 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:04:58.885 08:55:35 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:58.885 08:55:35 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:58.885 08:55:35 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.885 08:55:35 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:58.885 08:55:35 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:58.885 08:55:35 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:58.885 08:55:35 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.885 08:55:35 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:58.885 08:55:35 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.885 08:55:35 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:58.885 08:55:35 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:58.885 08:55:35 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.885 08:55:35 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:58.885 08:55:35 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.885 08:55:35 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.885 08:55:35 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.885 08:55:35 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:58.885 08:55:35 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.885 08:55:35 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:58.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.885 --rc genhtml_branch_coverage=1 00:04:58.885 --rc genhtml_function_coverage=1 00:04:58.885 --rc genhtml_legend=1 00:04:58.885 --rc geninfo_all_blocks=1 
00:04:58.885 --rc geninfo_unexecuted_blocks=1 00:04:58.885 00:04:58.885 ' 00:04:58.885 08:55:35 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:58.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.885 --rc genhtml_branch_coverage=1 00:04:58.885 --rc genhtml_function_coverage=1 00:04:58.885 --rc genhtml_legend=1 00:04:58.885 --rc geninfo_all_blocks=1 00:04:58.885 --rc geninfo_unexecuted_blocks=1 00:04:58.885 00:04:58.885 ' 00:04:58.885 08:55:35 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:58.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.885 --rc genhtml_branch_coverage=1 00:04:58.885 --rc genhtml_function_coverage=1 00:04:58.885 --rc genhtml_legend=1 00:04:58.885 --rc geninfo_all_blocks=1 00:04:58.885 --rc geninfo_unexecuted_blocks=1 00:04:58.885 00:04:58.885 ' 00:04:58.885 08:55:35 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:58.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.885 --rc genhtml_branch_coverage=1 00:04:58.885 --rc genhtml_function_coverage=1 00:04:58.885 --rc genhtml_legend=1 00:04:58.885 --rc geninfo_all_blocks=1 00:04:58.885 --rc geninfo_unexecuted_blocks=1 00:04:58.885 00:04:58.885 ' 00:04:58.885 08:55:35 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:58.885 08:55:35 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:58.885 08:55:35 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:58.885 08:55:35 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:58.885 08:55:35 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:58.885 08:55:35 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:58.885 08:55:35 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:04:58.885 08:55:35 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:58.885 08:55:35 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:58.885 08:55:35 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:58.885 08:55:35 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:58.885 08:55:35 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:58.885 08:55:35 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:04:58.885 08:55:35 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:04:58.885 08:55:35 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:58.885 08:55:35 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:58.885 08:55:35 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:58.885 08:55:35 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:58.885 08:55:35 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:58.885 08:55:35 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:58.885 08:55:35 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:58.885 08:55:35 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:58.885 08:55:35 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:58.885 08:55:35 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.885 08:55:35 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.885 08:55:35 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.885 08:55:35 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:58.885 08:55:35 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.885 08:55:35 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:58.885 08:55:35 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:58.885 08:55:35 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:58.885 08:55:35 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:58.885 08:55:35 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:58.885 08:55:35 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:58.885 08:55:35 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:58.885 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:58.885 08:55:35 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:58.885 08:55:35 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:58.885 08:55:35 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:58.885 08:55:35 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:58.885 08:55:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:58.885 08:55:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:58.885 08:55:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:58.886 08:55:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:58.886 08:55:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:58.886 08:55:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:58.886 08:55:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:58.886 08:55:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:58.886 08:55:35 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:58.886 08:55:35 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:58.886 INFO: launching applications... 00:04:58.886 08:55:35 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:58.886 08:55:35 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:58.886 08:55:35 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:58.886 08:55:35 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:58.886 08:55:35 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:58.886 08:55:35 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:58.886 08:55:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:58.886 08:55:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:58.886 08:55:35 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3204231 00:04:58.886 08:55:35 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:58.886 08:55:35 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:58.886 Waiting for target to run... 
00:04:58.886 08:55:35 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3204231 /var/tmp/spdk_tgt.sock 00:04:58.886 08:55:35 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 3204231 ']' 00:04:58.886 08:55:35 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:58.886 08:55:35 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:58.886 08:55:35 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:58.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:58.886 08:55:35 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:58.886 08:55:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:59.146 [2024-11-20 08:55:35.921664] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:04:59.146 [2024-11-20 08:55:35.921756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3204231 ] 00:04:59.712 [2024-11-20 08:55:36.444081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.712 [2024-11-20 08:55:36.495754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.970 08:55:36 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:59.970 08:55:36 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:04:59.970 08:55:36 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:59.970 00:04:59.970 08:55:36 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:59.970 INFO: shutting down applications... 00:04:59.970 08:55:36 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:59.970 08:55:36 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:59.970 08:55:36 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:59.970 08:55:36 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3204231 ]] 00:04:59.970 08:55:36 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3204231 00:04:59.970 08:55:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:59.970 08:55:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:59.970 08:55:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3204231 00:04:59.970 08:55:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:00.535 08:55:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:00.535 08:55:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:00.535 08:55:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3204231 00:05:00.535 08:55:37 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:00.535 08:55:37 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:00.535 08:55:37 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:00.535 08:55:37 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:00.535 SPDK target shutdown done 00:05:00.536 08:55:37 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:00.536 Success 00:05:00.536 00:05:00.536 real 0m1.689s 00:05:00.536 user 0m1.526s 00:05:00.536 sys 0m0.632s 00:05:00.536 08:55:37 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:00.536 08:55:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set 
+x 00:05:00.536 ************************************ 00:05:00.536 END TEST json_config_extra_key 00:05:00.536 ************************************ 00:05:00.536 08:55:37 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:00.536 08:55:37 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:00.536 08:55:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:00.536 08:55:37 -- common/autotest_common.sh@10 -- # set +x 00:05:00.536 ************************************ 00:05:00.536 START TEST alias_rpc 00:05:00.536 ************************************ 00:05:00.536 08:55:37 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:00.536 * Looking for test storage... 00:05:00.536 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:00.536 08:55:37 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:00.536 08:55:37 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:00.536 08:55:37 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:00.794 08:55:37 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:00.794 08:55:37 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.794 08:55:37 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.794 08:55:37 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.794 08:55:37 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.794 08:55:37 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.794 08:55:37 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.794 08:55:37 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.794 08:55:37 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.794 08:55:37 alias_rpc -- scripts/common.sh@340 -- # 
ver1_l=2 00:05:00.794 08:55:37 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.794 08:55:37 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.794 08:55:37 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:00.794 08:55:37 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:00.794 08:55:37 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.794 08:55:37 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:00.794 08:55:37 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:00.794 08:55:37 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:00.794 08:55:37 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.794 08:55:37 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:00.794 08:55:37 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.794 08:55:37 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:00.794 08:55:37 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:00.794 08:55:37 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.794 08:55:37 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:00.794 08:55:37 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.794 08:55:37 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.794 08:55:37 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.794 08:55:37 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:00.794 08:55:37 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.794 08:55:37 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:00.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.794 --rc genhtml_branch_coverage=1 00:05:00.794 --rc genhtml_function_coverage=1 00:05:00.794 --rc genhtml_legend=1 00:05:00.794 --rc geninfo_all_blocks=1 00:05:00.794 --rc geninfo_unexecuted_blocks=1 00:05:00.794 
00:05:00.794 ' 00:05:00.794 08:55:37 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:00.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.794 --rc genhtml_branch_coverage=1 00:05:00.794 --rc genhtml_function_coverage=1 00:05:00.794 --rc genhtml_legend=1 00:05:00.794 --rc geninfo_all_blocks=1 00:05:00.794 --rc geninfo_unexecuted_blocks=1 00:05:00.794 00:05:00.794 ' 00:05:00.794 08:55:37 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:00.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.794 --rc genhtml_branch_coverage=1 00:05:00.794 --rc genhtml_function_coverage=1 00:05:00.794 --rc genhtml_legend=1 00:05:00.794 --rc geninfo_all_blocks=1 00:05:00.794 --rc geninfo_unexecuted_blocks=1 00:05:00.794 00:05:00.794 ' 00:05:00.794 08:55:37 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:00.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.794 --rc genhtml_branch_coverage=1 00:05:00.794 --rc genhtml_function_coverage=1 00:05:00.794 --rc genhtml_legend=1 00:05:00.794 --rc geninfo_all_blocks=1 00:05:00.794 --rc geninfo_unexecuted_blocks=1 00:05:00.794 00:05:00.794 ' 00:05:00.794 08:55:37 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:00.794 08:55:37 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3204548 00:05:00.794 08:55:37 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:00.794 08:55:37 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3204548 00:05:00.794 08:55:37 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 3204548 ']' 00:05:00.794 08:55:37 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.794 08:55:37 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:00.794 08:55:37 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.794 08:55:37 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:00.794 08:55:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.794 [2024-11-20 08:55:37.669100] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:05:00.794 [2024-11-20 08:55:37.669190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3204548 ] 00:05:00.794 [2024-11-20 08:55:37.733693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.794 [2024-11-20 08:55:37.792191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.053 08:55:38 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:01.053 08:55:38 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:01.053 08:55:38 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:01.679 08:55:38 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3204548 00:05:01.679 08:55:38 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 3204548 ']' 00:05:01.679 08:55:38 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 3204548 00:05:01.679 08:55:38 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:05:01.679 08:55:38 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:01.679 08:55:38 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3204548 00:05:01.679 08:55:38 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:01.679 08:55:38 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:01.679 
08:55:38 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3204548' 00:05:01.680 killing process with pid 3204548 00:05:01.680 08:55:38 alias_rpc -- common/autotest_common.sh@971 -- # kill 3204548 00:05:01.680 08:55:38 alias_rpc -- common/autotest_common.sh@976 -- # wait 3204548 00:05:01.938 00:05:01.938 real 0m1.326s 00:05:01.938 user 0m1.444s 00:05:01.938 sys 0m0.446s 00:05:01.938 08:55:38 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:01.938 08:55:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.938 ************************************ 00:05:01.938 END TEST alias_rpc 00:05:01.938 ************************************ 00:05:01.938 08:55:38 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:01.938 08:55:38 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:01.938 08:55:38 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:01.938 08:55:38 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:01.938 08:55:38 -- common/autotest_common.sh@10 -- # set +x 00:05:01.938 ************************************ 00:05:01.938 START TEST spdkcli_tcp 00:05:01.938 ************************************ 00:05:01.938 08:55:38 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:01.938 * Looking for test storage... 
00:05:01.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:01.938 08:55:38 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:01.938 08:55:38 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:01.938 08:55:38 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:02.197 08:55:38 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:02.197 08:55:38 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.197 08:55:38 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.197 08:55:38 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.197 08:55:38 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.197 08:55:38 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.197 08:55:38 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.197 08:55:38 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.197 08:55:38 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.197 08:55:38 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.197 08:55:38 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.197 08:55:38 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.197 08:55:38 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:02.197 08:55:38 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:02.197 08:55:38 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.197 08:55:38 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.197 08:55:38 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:02.197 08:55:38 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:02.197 08:55:38 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.197 08:55:38 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:02.197 08:55:38 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.197 08:55:38 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:02.197 08:55:38 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:02.197 08:55:38 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.197 08:55:38 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:02.197 08:55:38 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.197 08:55:38 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.197 08:55:38 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.197 08:55:38 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:02.197 08:55:38 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.197 08:55:38 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:02.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.197 --rc genhtml_branch_coverage=1 00:05:02.197 --rc genhtml_function_coverage=1 00:05:02.197 --rc genhtml_legend=1 00:05:02.197 --rc geninfo_all_blocks=1 00:05:02.198 --rc geninfo_unexecuted_blocks=1 00:05:02.198 00:05:02.198 ' 00:05:02.198 08:55:38 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:02.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.198 --rc genhtml_branch_coverage=1 00:05:02.198 --rc genhtml_function_coverage=1 00:05:02.198 --rc genhtml_legend=1 00:05:02.198 --rc geninfo_all_blocks=1 00:05:02.198 --rc geninfo_unexecuted_blocks=1 00:05:02.198 00:05:02.198 ' 00:05:02.198 08:55:38 spdkcli_tcp -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:02.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.198 --rc genhtml_branch_coverage=1 00:05:02.198 --rc genhtml_function_coverage=1 00:05:02.198 --rc genhtml_legend=1 00:05:02.198 --rc geninfo_all_blocks=1 00:05:02.198 --rc geninfo_unexecuted_blocks=1 00:05:02.198 00:05:02.198 ' 00:05:02.198 08:55:38 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:02.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.198 --rc genhtml_branch_coverage=1 00:05:02.198 --rc genhtml_function_coverage=1 00:05:02.198 --rc genhtml_legend=1 00:05:02.198 --rc geninfo_all_blocks=1 00:05:02.198 --rc geninfo_unexecuted_blocks=1 00:05:02.198 00:05:02.198 ' 00:05:02.198 08:55:38 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:02.198 08:55:38 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:02.198 08:55:38 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:02.198 08:55:38 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:02.198 08:55:38 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:02.198 08:55:38 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:02.198 08:55:38 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:02.198 08:55:38 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:02.198 08:55:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:02.198 08:55:38 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3204751 00:05:02.198 08:55:38 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:02.198 08:55:38 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 3204751 00:05:02.198 08:55:38 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 3204751 ']' 00:05:02.198 08:55:38 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.198 08:55:38 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:02.198 08:55:38 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.198 08:55:38 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:02.198 08:55:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:02.198 [2024-11-20 08:55:39.045230] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:05:02.198 [2024-11-20 08:55:39.045330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3204751 ] 00:05:02.198 [2024-11-20 08:55:39.110293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.198 [2024-11-20 08:55:39.170034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.198 [2024-11-20 08:55:39.170039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.455 08:55:39 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:02.455 08:55:39 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:05:02.455 08:55:39 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3204870 00:05:02.455 08:55:39 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:02.455 08:55:39 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 
127.0.0.1 -p 9998 rpc_get_methods 00:05:02.713 [ 00:05:02.713 "bdev_malloc_delete", 00:05:02.713 "bdev_malloc_create", 00:05:02.713 "bdev_null_resize", 00:05:02.713 "bdev_null_delete", 00:05:02.713 "bdev_null_create", 00:05:02.713 "bdev_nvme_cuse_unregister", 00:05:02.713 "bdev_nvme_cuse_register", 00:05:02.713 "bdev_opal_new_user", 00:05:02.713 "bdev_opal_set_lock_state", 00:05:02.713 "bdev_opal_delete", 00:05:02.713 "bdev_opal_get_info", 00:05:02.713 "bdev_opal_create", 00:05:02.713 "bdev_nvme_opal_revert", 00:05:02.713 "bdev_nvme_opal_init", 00:05:02.713 "bdev_nvme_send_cmd", 00:05:02.713 "bdev_nvme_set_keys", 00:05:02.713 "bdev_nvme_get_path_iostat", 00:05:02.713 "bdev_nvme_get_mdns_discovery_info", 00:05:02.713 "bdev_nvme_stop_mdns_discovery", 00:05:02.713 "bdev_nvme_start_mdns_discovery", 00:05:02.713 "bdev_nvme_set_multipath_policy", 00:05:02.713 "bdev_nvme_set_preferred_path", 00:05:02.713 "bdev_nvme_get_io_paths", 00:05:02.713 "bdev_nvme_remove_error_injection", 00:05:02.713 "bdev_nvme_add_error_injection", 00:05:02.713 "bdev_nvme_get_discovery_info", 00:05:02.713 "bdev_nvme_stop_discovery", 00:05:02.713 "bdev_nvme_start_discovery", 00:05:02.713 "bdev_nvme_get_controller_health_info", 00:05:02.713 "bdev_nvme_disable_controller", 00:05:02.713 "bdev_nvme_enable_controller", 00:05:02.713 "bdev_nvme_reset_controller", 00:05:02.713 "bdev_nvme_get_transport_statistics", 00:05:02.713 "bdev_nvme_apply_firmware", 00:05:02.713 "bdev_nvme_detach_controller", 00:05:02.713 "bdev_nvme_get_controllers", 00:05:02.713 "bdev_nvme_attach_controller", 00:05:02.713 "bdev_nvme_set_hotplug", 00:05:02.713 "bdev_nvme_set_options", 00:05:02.713 "bdev_passthru_delete", 00:05:02.713 "bdev_passthru_create", 00:05:02.713 "bdev_lvol_set_parent_bdev", 00:05:02.713 "bdev_lvol_set_parent", 00:05:02.713 "bdev_lvol_check_shallow_copy", 00:05:02.713 "bdev_lvol_start_shallow_copy", 00:05:02.713 "bdev_lvol_grow_lvstore", 00:05:02.713 "bdev_lvol_get_lvols", 00:05:02.713 "bdev_lvol_get_lvstores", 
00:05:02.713 "bdev_lvol_delete", 00:05:02.713 "bdev_lvol_set_read_only", 00:05:02.713 "bdev_lvol_resize", 00:05:02.713 "bdev_lvol_decouple_parent", 00:05:02.714 "bdev_lvol_inflate", 00:05:02.714 "bdev_lvol_rename", 00:05:02.714 "bdev_lvol_clone_bdev", 00:05:02.714 "bdev_lvol_clone", 00:05:02.714 "bdev_lvol_snapshot", 00:05:02.714 "bdev_lvol_create", 00:05:02.714 "bdev_lvol_delete_lvstore", 00:05:02.714 "bdev_lvol_rename_lvstore", 00:05:02.714 "bdev_lvol_create_lvstore", 00:05:02.714 "bdev_raid_set_options", 00:05:02.714 "bdev_raid_remove_base_bdev", 00:05:02.714 "bdev_raid_add_base_bdev", 00:05:02.714 "bdev_raid_delete", 00:05:02.714 "bdev_raid_create", 00:05:02.714 "bdev_raid_get_bdevs", 00:05:02.714 "bdev_error_inject_error", 00:05:02.714 "bdev_error_delete", 00:05:02.714 "bdev_error_create", 00:05:02.714 "bdev_split_delete", 00:05:02.714 "bdev_split_create", 00:05:02.714 "bdev_delay_delete", 00:05:02.714 "bdev_delay_create", 00:05:02.714 "bdev_delay_update_latency", 00:05:02.714 "bdev_zone_block_delete", 00:05:02.714 "bdev_zone_block_create", 00:05:02.714 "blobfs_create", 00:05:02.714 "blobfs_detect", 00:05:02.714 "blobfs_set_cache_size", 00:05:02.714 "bdev_aio_delete", 00:05:02.714 "bdev_aio_rescan", 00:05:02.714 "bdev_aio_create", 00:05:02.714 "bdev_ftl_set_property", 00:05:02.714 "bdev_ftl_get_properties", 00:05:02.714 "bdev_ftl_get_stats", 00:05:02.714 "bdev_ftl_unmap", 00:05:02.714 "bdev_ftl_unload", 00:05:02.714 "bdev_ftl_delete", 00:05:02.714 "bdev_ftl_load", 00:05:02.714 "bdev_ftl_create", 00:05:02.714 "bdev_virtio_attach_controller", 00:05:02.714 "bdev_virtio_scsi_get_devices", 00:05:02.714 "bdev_virtio_detach_controller", 00:05:02.714 "bdev_virtio_blk_set_hotplug", 00:05:02.714 "bdev_iscsi_delete", 00:05:02.714 "bdev_iscsi_create", 00:05:02.714 "bdev_iscsi_set_options", 00:05:02.714 "accel_error_inject_error", 00:05:02.714 "ioat_scan_accel_module", 00:05:02.714 "dsa_scan_accel_module", 00:05:02.714 "iaa_scan_accel_module", 00:05:02.714 
"vfu_virtio_create_fs_endpoint", 00:05:02.714 "vfu_virtio_create_scsi_endpoint", 00:05:02.714 "vfu_virtio_scsi_remove_target", 00:05:02.714 "vfu_virtio_scsi_add_target", 00:05:02.714 "vfu_virtio_create_blk_endpoint", 00:05:02.714 "vfu_virtio_delete_endpoint", 00:05:02.714 "keyring_file_remove_key", 00:05:02.714 "keyring_file_add_key", 00:05:02.714 "keyring_linux_set_options", 00:05:02.714 "fsdev_aio_delete", 00:05:02.714 "fsdev_aio_create", 00:05:02.714 "iscsi_get_histogram", 00:05:02.714 "iscsi_enable_histogram", 00:05:02.714 "iscsi_set_options", 00:05:02.714 "iscsi_get_auth_groups", 00:05:02.714 "iscsi_auth_group_remove_secret", 00:05:02.714 "iscsi_auth_group_add_secret", 00:05:02.714 "iscsi_delete_auth_group", 00:05:02.714 "iscsi_create_auth_group", 00:05:02.714 "iscsi_set_discovery_auth", 00:05:02.714 "iscsi_get_options", 00:05:02.714 "iscsi_target_node_request_logout", 00:05:02.714 "iscsi_target_node_set_redirect", 00:05:02.714 "iscsi_target_node_set_auth", 00:05:02.714 "iscsi_target_node_add_lun", 00:05:02.714 "iscsi_get_stats", 00:05:02.714 "iscsi_get_connections", 00:05:02.714 "iscsi_portal_group_set_auth", 00:05:02.714 "iscsi_start_portal_group", 00:05:02.714 "iscsi_delete_portal_group", 00:05:02.714 "iscsi_create_portal_group", 00:05:02.714 "iscsi_get_portal_groups", 00:05:02.714 "iscsi_delete_target_node", 00:05:02.714 "iscsi_target_node_remove_pg_ig_maps", 00:05:02.714 "iscsi_target_node_add_pg_ig_maps", 00:05:02.714 "iscsi_create_target_node", 00:05:02.714 "iscsi_get_target_nodes", 00:05:02.714 "iscsi_delete_initiator_group", 00:05:02.714 "iscsi_initiator_group_remove_initiators", 00:05:02.714 "iscsi_initiator_group_add_initiators", 00:05:02.714 "iscsi_create_initiator_group", 00:05:02.714 "iscsi_get_initiator_groups", 00:05:02.714 "nvmf_set_crdt", 00:05:02.714 "nvmf_set_config", 00:05:02.714 "nvmf_set_max_subsystems", 00:05:02.714 "nvmf_stop_mdns_prr", 00:05:02.714 "nvmf_publish_mdns_prr", 00:05:02.714 "nvmf_subsystem_get_listeners", 00:05:02.714 
"nvmf_subsystem_get_qpairs", 00:05:02.714 "nvmf_subsystem_get_controllers", 00:05:02.714 "nvmf_get_stats", 00:05:02.714 "nvmf_get_transports", 00:05:02.714 "nvmf_create_transport", 00:05:02.714 "nvmf_get_targets", 00:05:02.714 "nvmf_delete_target", 00:05:02.714 "nvmf_create_target", 00:05:02.714 "nvmf_subsystem_allow_any_host", 00:05:02.714 "nvmf_subsystem_set_keys", 00:05:02.714 "nvmf_subsystem_remove_host", 00:05:02.714 "nvmf_subsystem_add_host", 00:05:02.714 "nvmf_ns_remove_host", 00:05:02.714 "nvmf_ns_add_host", 00:05:02.714 "nvmf_subsystem_remove_ns", 00:05:02.714 "nvmf_subsystem_set_ns_ana_group", 00:05:02.714 "nvmf_subsystem_add_ns", 00:05:02.714 "nvmf_subsystem_listener_set_ana_state", 00:05:02.714 "nvmf_discovery_get_referrals", 00:05:02.714 "nvmf_discovery_remove_referral", 00:05:02.714 "nvmf_discovery_add_referral", 00:05:02.714 "nvmf_subsystem_remove_listener", 00:05:02.714 "nvmf_subsystem_add_listener", 00:05:02.714 "nvmf_delete_subsystem", 00:05:02.714 "nvmf_create_subsystem", 00:05:02.714 "nvmf_get_subsystems", 00:05:02.714 "env_dpdk_get_mem_stats", 00:05:02.714 "nbd_get_disks", 00:05:02.714 "nbd_stop_disk", 00:05:02.714 "nbd_start_disk", 00:05:02.714 "ublk_recover_disk", 00:05:02.714 "ublk_get_disks", 00:05:02.714 "ublk_stop_disk", 00:05:02.714 "ublk_start_disk", 00:05:02.714 "ublk_destroy_target", 00:05:02.714 "ublk_create_target", 00:05:02.714 "virtio_blk_create_transport", 00:05:02.714 "virtio_blk_get_transports", 00:05:02.714 "vhost_controller_set_coalescing", 00:05:02.714 "vhost_get_controllers", 00:05:02.714 "vhost_delete_controller", 00:05:02.714 "vhost_create_blk_controller", 00:05:02.714 "vhost_scsi_controller_remove_target", 00:05:02.714 "vhost_scsi_controller_add_target", 00:05:02.714 "vhost_start_scsi_controller", 00:05:02.714 "vhost_create_scsi_controller", 00:05:02.714 "thread_set_cpumask", 00:05:02.714 "scheduler_set_options", 00:05:02.714 "framework_get_governor", 00:05:02.714 "framework_get_scheduler", 00:05:02.714 
"framework_set_scheduler", 00:05:02.714 "framework_get_reactors", 00:05:02.714 "thread_get_io_channels", 00:05:02.714 "thread_get_pollers", 00:05:02.714 "thread_get_stats", 00:05:02.714 "framework_monitor_context_switch", 00:05:02.714 "spdk_kill_instance", 00:05:02.714 "log_enable_timestamps", 00:05:02.714 "log_get_flags", 00:05:02.714 "log_clear_flag", 00:05:02.714 "log_set_flag", 00:05:02.714 "log_get_level", 00:05:02.714 "log_set_level", 00:05:02.714 "log_get_print_level", 00:05:02.714 "log_set_print_level", 00:05:02.714 "framework_enable_cpumask_locks", 00:05:02.714 "framework_disable_cpumask_locks", 00:05:02.714 "framework_wait_init", 00:05:02.714 "framework_start_init", 00:05:02.714 "scsi_get_devices", 00:05:02.714 "bdev_get_histogram", 00:05:02.714 "bdev_enable_histogram", 00:05:02.714 "bdev_set_qos_limit", 00:05:02.714 "bdev_set_qd_sampling_period", 00:05:02.714 "bdev_get_bdevs", 00:05:02.714 "bdev_reset_iostat", 00:05:02.714 "bdev_get_iostat", 00:05:02.714 "bdev_examine", 00:05:02.714 "bdev_wait_for_examine", 00:05:02.714 "bdev_set_options", 00:05:02.714 "accel_get_stats", 00:05:02.714 "accel_set_options", 00:05:02.714 "accel_set_driver", 00:05:02.714 "accel_crypto_key_destroy", 00:05:02.714 "accel_crypto_keys_get", 00:05:02.714 "accel_crypto_key_create", 00:05:02.714 "accel_assign_opc", 00:05:02.714 "accel_get_module_info", 00:05:02.714 "accel_get_opc_assignments", 00:05:02.714 "vmd_rescan", 00:05:02.714 "vmd_remove_device", 00:05:02.714 "vmd_enable", 00:05:02.714 "sock_get_default_impl", 00:05:02.714 "sock_set_default_impl", 00:05:02.714 "sock_impl_set_options", 00:05:02.714 "sock_impl_get_options", 00:05:02.714 "iobuf_get_stats", 00:05:02.714 "iobuf_set_options", 00:05:02.714 "keyring_get_keys", 00:05:02.714 "vfu_tgt_set_base_path", 00:05:02.714 "framework_get_pci_devices", 00:05:02.714 "framework_get_config", 00:05:02.714 "framework_get_subsystems", 00:05:02.714 "fsdev_set_opts", 00:05:02.714 "fsdev_get_opts", 00:05:02.714 "trace_get_info", 
00:05:02.714 "trace_get_tpoint_group_mask", 00:05:02.714 "trace_disable_tpoint_group", 00:05:02.714 "trace_enable_tpoint_group", 00:05:02.714 "trace_clear_tpoint_mask", 00:05:02.714 "trace_set_tpoint_mask", 00:05:02.714 "notify_get_notifications", 00:05:02.714 "notify_get_types", 00:05:02.714 "spdk_get_version", 00:05:02.714 "rpc_get_methods" 00:05:02.714 ] 00:05:02.714 08:55:39 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:02.714 08:55:39 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:02.714 08:55:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:02.714 08:55:39 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:02.714 08:55:39 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3204751 00:05:02.714 08:55:39 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 3204751 ']' 00:05:02.714 08:55:39 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 3204751 00:05:02.714 08:55:39 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:05:02.714 08:55:39 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:02.714 08:55:39 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3204751 00:05:02.972 08:55:39 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:02.972 08:55:39 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:02.972 08:55:39 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3204751' 00:05:02.972 killing process with pid 3204751 00:05:02.972 08:55:39 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 3204751 00:05:02.972 08:55:39 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 3204751 00:05:03.230 00:05:03.230 real 0m1.357s 00:05:03.230 user 0m2.435s 00:05:03.230 sys 0m0.457s 00:05:03.230 08:55:40 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:03.230 08:55:40 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:05:03.230 ************************************ 00:05:03.230 END TEST spdkcli_tcp 00:05:03.230 ************************************ 00:05:03.230 08:55:40 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:03.230 08:55:40 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:03.230 08:55:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:03.230 08:55:40 -- common/autotest_common.sh@10 -- # set +x 00:05:03.489 ************************************ 00:05:03.489 START TEST dpdk_mem_utility 00:05:03.489 ************************************ 00:05:03.489 08:55:40 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:03.489 * Looking for test storage... 00:05:03.489 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:03.489 08:55:40 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:03.489 08:55:40 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:05:03.489 08:55:40 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:03.489 08:55:40 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:03.489 08:55:40 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.489 08:55:40 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.489 08:55:40 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.489 08:55:40 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.489 08:55:40 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.489 08:55:40 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.489 08:55:40 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:05:03.489 08:55:40 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.489 08:55:40 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.489 08:55:40 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.489 08:55:40 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.489 08:55:40 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:03.489 08:55:40 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:03.489 08:55:40 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.489 08:55:40 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:03.489 08:55:40 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:03.489 08:55:40 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:03.489 08:55:40 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.489 08:55:40 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:03.489 08:55:40 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.489 08:55:40 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:03.489 08:55:40 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:03.489 08:55:40 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.489 08:55:40 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:03.489 08:55:40 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.489 08:55:40 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.490 08:55:40 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.490 08:55:40 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:03.490 08:55:40 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.490 08:55:40 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 
00:05:03.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.490 --rc genhtml_branch_coverage=1 00:05:03.490 --rc genhtml_function_coverage=1 00:05:03.490 --rc genhtml_legend=1 00:05:03.490 --rc geninfo_all_blocks=1 00:05:03.490 --rc geninfo_unexecuted_blocks=1 00:05:03.490 00:05:03.490 ' 00:05:03.490 08:55:40 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:03.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.490 --rc genhtml_branch_coverage=1 00:05:03.490 --rc genhtml_function_coverage=1 00:05:03.490 --rc genhtml_legend=1 00:05:03.490 --rc geninfo_all_blocks=1 00:05:03.490 --rc geninfo_unexecuted_blocks=1 00:05:03.490 00:05:03.490 ' 00:05:03.490 08:55:40 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:03.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.490 --rc genhtml_branch_coverage=1 00:05:03.490 --rc genhtml_function_coverage=1 00:05:03.490 --rc genhtml_legend=1 00:05:03.490 --rc geninfo_all_blocks=1 00:05:03.490 --rc geninfo_unexecuted_blocks=1 00:05:03.490 00:05:03.490 ' 00:05:03.490 08:55:40 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:03.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.490 --rc genhtml_branch_coverage=1 00:05:03.490 --rc genhtml_function_coverage=1 00:05:03.490 --rc genhtml_legend=1 00:05:03.490 --rc geninfo_all_blocks=1 00:05:03.490 --rc geninfo_unexecuted_blocks=1 00:05:03.490 00:05:03.490 ' 00:05:03.490 08:55:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:03.490 08:55:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3204977 00:05:03.490 08:55:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:03.490 08:55:40 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3204977 00:05:03.490 08:55:40 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 3204977 ']' 00:05:03.490 08:55:40 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.490 08:55:40 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:03.490 08:55:40 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.490 08:55:40 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:03.490 08:55:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:03.490 [2024-11-20 08:55:40.464365] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:05:03.490 [2024-11-20 08:55:40.464449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3204977 ] 00:05:03.748 [2024-11-20 08:55:40.533009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.748 [2024-11-20 08:55:40.591593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.006 08:55:40 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:04.006 08:55:40 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:05:04.006 08:55:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:04.006 08:55:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:04.006 08:55:40 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.006 
08:55:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:04.006 { 00:05:04.006 "filename": "/tmp/spdk_mem_dump.txt" 00:05:04.006 } 00:05:04.006 08:55:40 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.006 08:55:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:04.006 DPDK memory size 810.000000 MiB in 1 heap(s) 00:05:04.006 1 heaps totaling size 810.000000 MiB 00:05:04.006 size: 810.000000 MiB heap id: 0 00:05:04.006 end heaps---------- 00:05:04.006 9 mempools totaling size 595.772034 MiB 00:05:04.006 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:04.006 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:04.006 size: 92.545471 MiB name: bdev_io_3204977 00:05:04.006 size: 50.003479 MiB name: msgpool_3204977 00:05:04.006 size: 36.509338 MiB name: fsdev_io_3204977 00:05:04.006 size: 21.763794 MiB name: PDU_Pool 00:05:04.006 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:04.006 size: 4.133484 MiB name: evtpool_3204977 00:05:04.006 size: 0.026123 MiB name: Session_Pool 00:05:04.006 end mempools------- 00:05:04.006 6 memzones totaling size 4.142822 MiB 00:05:04.006 size: 1.000366 MiB name: RG_ring_0_3204977 00:05:04.006 size: 1.000366 MiB name: RG_ring_1_3204977 00:05:04.006 size: 1.000366 MiB name: RG_ring_4_3204977 00:05:04.006 size: 1.000366 MiB name: RG_ring_5_3204977 00:05:04.006 size: 0.125366 MiB name: RG_ring_2_3204977 00:05:04.006 size: 0.015991 MiB name: RG_ring_3_3204977 00:05:04.006 end memzones------- 00:05:04.006 08:55:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:04.006 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:04.006 list of free elements. 
size: 10.862488 MiB 00:05:04.006 element at address: 0x200018a00000 with size: 0.999878 MiB 00:05:04.006 element at address: 0x200018c00000 with size: 0.999878 MiB 00:05:04.006 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:04.006 element at address: 0x200031800000 with size: 0.994446 MiB 00:05:04.006 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:04.006 element at address: 0x200012c00000 with size: 0.954285 MiB 00:05:04.006 element at address: 0x200018e00000 with size: 0.936584 MiB 00:05:04.006 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:04.006 element at address: 0x20001a600000 with size: 0.582886 MiB 00:05:04.006 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:04.006 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:04.006 element at address: 0x200019000000 with size: 0.485657 MiB 00:05:04.006 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:04.006 element at address: 0x200027a00000 with size: 0.410034 MiB 00:05:04.006 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:04.007 list of standard malloc elements. 
size: 199.218628 MiB 00:05:04.007 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:04.007 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:04.007 element at address: 0x200018afff80 with size: 1.000122 MiB 00:05:04.007 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:05:04.007 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:04.007 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:04.007 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:05:04.007 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:04.007 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:05:04.007 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:04.007 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:04.007 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:04.007 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:04.007 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:04.007 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:04.007 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:04.007 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:04.007 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:04.007 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:04.007 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:04.007 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:04.007 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:04.007 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:04.007 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:04.007 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:04.007 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:04.007 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:04.007 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:04.007 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:04.007 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:04.007 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:04.007 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:04.007 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:04.007 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:05:04.007 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:05:04.007 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:05:04.007 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:05:04.007 element at address: 0x20001a695380 with size: 0.000183 MiB 00:05:04.007 element at address: 0x20001a695440 with size: 0.000183 MiB 00:05:04.007 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:05:04.007 element at address: 0x200027a69040 with size: 0.000183 MiB 00:05:04.007 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:05:04.007 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:05:04.007 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:05:04.007 list of memzone associated elements. 
size: 599.918884 MiB 00:05:04.007 element at address: 0x20001a695500 with size: 211.416748 MiB 00:05:04.007 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:04.007 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:05:04.007 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:04.007 element at address: 0x200012df4780 with size: 92.045044 MiB 00:05:04.007 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3204977_0 00:05:04.007 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:04.007 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3204977_0 00:05:04.007 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:04.007 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3204977_0 00:05:04.007 element at address: 0x2000191be940 with size: 20.255554 MiB 00:05:04.007 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:04.007 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:05:04.007 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:04.007 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:04.007 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3204977_0 00:05:04.007 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:04.007 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3204977 00:05:04.007 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:04.007 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3204977 00:05:04.007 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:04.007 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:04.007 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:05:04.007 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:04.007 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:04.007 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:04.007 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:04.007 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:04.007 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:04.007 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3204977 00:05:04.007 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:04.007 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3204977 00:05:04.007 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:05:04.007 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3204977 00:05:04.007 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:05:04.007 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3204977 00:05:04.007 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:04.007 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3204977 00:05:04.007 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:04.007 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3204977 00:05:04.007 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:04.007 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:04.007 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:04.007 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:04.007 element at address: 0x20001907c540 with size: 0.250488 MiB 00:05:04.007 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:04.007 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:04.007 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3204977 00:05:04.007 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:04.007 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3204977 00:05:04.007 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:05:04.007 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:04.007 element at address: 0x200027a69100 with size: 0.023743 MiB 00:05:04.007 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:04.007 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:04.007 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3204977 00:05:04.007 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:05:04.007 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:04.007 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:04.007 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3204977 00:05:04.007 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:04.007 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3204977 00:05:04.007 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:04.007 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3204977 00:05:04.007 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:05:04.007 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:04.007 08:55:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:04.007 08:55:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3204977 00:05:04.007 08:55:40 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 3204977 ']' 00:05:04.007 08:55:40 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 3204977 00:05:04.007 08:55:40 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:05:04.007 08:55:40 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:04.007 08:55:40 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3204977 00:05:04.007 08:55:41 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:04.007 08:55:41 
dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:04.007 08:55:41 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3204977' 00:05:04.007 killing process with pid 3204977 00:05:04.007 08:55:41 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 3204977 00:05:04.007 08:55:41 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 3204977 00:05:04.574 00:05:04.574 real 0m1.181s 00:05:04.574 user 0m1.172s 00:05:04.574 sys 0m0.425s 00:05:04.574 08:55:41 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:04.574 08:55:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:04.574 ************************************ 00:05:04.574 END TEST dpdk_mem_utility 00:05:04.574 ************************************ 00:05:04.574 08:55:41 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:04.574 08:55:41 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:04.574 08:55:41 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:04.574 08:55:41 -- common/autotest_common.sh@10 -- # set +x 00:05:04.574 ************************************ 00:05:04.574 START TEST event 00:05:04.574 ************************************ 00:05:04.574 08:55:41 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:04.574 * Looking for test storage... 
00:05:04.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:04.574 08:55:41 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:04.574 08:55:41 event -- common/autotest_common.sh@1691 -- # lcov --version 00:05:04.574 08:55:41 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:04.833 08:55:41 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:04.833 08:55:41 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.833 08:55:41 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.833 08:55:41 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.833 08:55:41 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.833 08:55:41 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.833 08:55:41 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.833 08:55:41 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.833 08:55:41 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.833 08:55:41 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.833 08:55:41 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.833 08:55:41 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.833 08:55:41 event -- scripts/common.sh@344 -- # case "$op" in 00:05:04.833 08:55:41 event -- scripts/common.sh@345 -- # : 1 00:05:04.833 08:55:41 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.833 08:55:41 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.833 08:55:41 event -- scripts/common.sh@365 -- # decimal 1 00:05:04.833 08:55:41 event -- scripts/common.sh@353 -- # local d=1 00:05:04.833 08:55:41 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.833 08:55:41 event -- scripts/common.sh@355 -- # echo 1 00:05:04.833 08:55:41 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.833 08:55:41 event -- scripts/common.sh@366 -- # decimal 2 00:05:04.833 08:55:41 event -- scripts/common.sh@353 -- # local d=2 00:05:04.833 08:55:41 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.833 08:55:41 event -- scripts/common.sh@355 -- # echo 2 00:05:04.833 08:55:41 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.834 08:55:41 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.834 08:55:41 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.834 08:55:41 event -- scripts/common.sh@368 -- # return 0 00:05:04.834 08:55:41 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.834 08:55:41 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:04.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.834 --rc genhtml_branch_coverage=1 00:05:04.834 --rc genhtml_function_coverage=1 00:05:04.834 --rc genhtml_legend=1 00:05:04.834 --rc geninfo_all_blocks=1 00:05:04.834 --rc geninfo_unexecuted_blocks=1 00:05:04.834 00:05:04.834 ' 00:05:04.834 08:55:41 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:04.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.834 --rc genhtml_branch_coverage=1 00:05:04.834 --rc genhtml_function_coverage=1 00:05:04.834 --rc genhtml_legend=1 00:05:04.834 --rc geninfo_all_blocks=1 00:05:04.834 --rc geninfo_unexecuted_blocks=1 00:05:04.834 00:05:04.834 ' 00:05:04.834 08:55:41 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:04.834 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:04.834 --rc genhtml_branch_coverage=1 00:05:04.834 --rc genhtml_function_coverage=1 00:05:04.834 --rc genhtml_legend=1 00:05:04.834 --rc geninfo_all_blocks=1 00:05:04.834 --rc geninfo_unexecuted_blocks=1 00:05:04.834 00:05:04.834 ' 00:05:04.834 08:55:41 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:04.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.834 --rc genhtml_branch_coverage=1 00:05:04.834 --rc genhtml_function_coverage=1 00:05:04.834 --rc genhtml_legend=1 00:05:04.834 --rc geninfo_all_blocks=1 00:05:04.834 --rc geninfo_unexecuted_blocks=1 00:05:04.834 00:05:04.834 ' 00:05:04.834 08:55:41 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:04.834 08:55:41 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:04.834 08:55:41 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:04.834 08:55:41 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:05:04.834 08:55:41 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:04.834 08:55:41 event -- common/autotest_common.sh@10 -- # set +x 00:05:04.834 ************************************ 00:05:04.834 START TEST event_perf 00:05:04.834 ************************************ 00:05:04.834 08:55:41 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:04.834 Running I/O for 1 seconds...[2024-11-20 08:55:41.677300] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:05:04.834 [2024-11-20 08:55:41.677400] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3205272 ] 00:05:04.834 [2024-11-20 08:55:41.746802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:04.834 [2024-11-20 08:55:41.808861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.834 [2024-11-20 08:55:41.808968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:04.834 [2024-11-20 08:55:41.809043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:04.834 [2024-11-20 08:55:41.809046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.208 Running I/O for 1 seconds... 00:05:06.208 lcore 0: 222845 00:05:06.208 lcore 1: 222844 00:05:06.208 lcore 2: 222844 00:05:06.208 lcore 3: 222844 00:05:06.208 done. 
00:05:06.208 00:05:06.208 real 0m1.210s 00:05:06.208 user 0m4.129s 00:05:06.208 sys 0m0.075s 00:05:06.208 08:55:42 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:06.208 08:55:42 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:06.208 ************************************ 00:05:06.208 END TEST event_perf 00:05:06.208 ************************************ 00:05:06.208 08:55:42 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:06.208 08:55:42 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:06.208 08:55:42 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:06.208 08:55:42 event -- common/autotest_common.sh@10 -- # set +x 00:05:06.208 ************************************ 00:05:06.208 START TEST event_reactor 00:05:06.208 ************************************ 00:05:06.208 08:55:42 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:06.208 [2024-11-20 08:55:42.931550] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:05:06.208 [2024-11-20 08:55:42.931625] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3205437 ] 00:05:06.208 [2024-11-20 08:55:42.997092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.208 [2024-11-20 08:55:43.053539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.141 test_start 00:05:07.141 oneshot 00:05:07.141 tick 100 00:05:07.141 tick 100 00:05:07.141 tick 250 00:05:07.141 tick 100 00:05:07.141 tick 100 00:05:07.141 tick 100 00:05:07.141 tick 250 00:05:07.141 tick 500 00:05:07.141 tick 100 00:05:07.141 tick 100 00:05:07.141 tick 250 00:05:07.141 tick 100 00:05:07.141 tick 100 00:05:07.141 test_end 00:05:07.141 00:05:07.141 real 0m1.198s 00:05:07.141 user 0m1.129s 00:05:07.141 sys 0m0.065s 00:05:07.141 08:55:44 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:07.141 08:55:44 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:07.141 ************************************ 00:05:07.141 END TEST event_reactor 00:05:07.141 ************************************ 00:05:07.141 08:55:44 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:07.141 08:55:44 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:07.141 08:55:44 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:07.141 08:55:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:07.399 ************************************ 00:05:07.399 START TEST event_reactor_perf 00:05:07.399 ************************************ 00:05:07.399 08:55:44 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:05:07.399 [2024-11-20 08:55:44.187125] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:05:07.399 [2024-11-20 08:55:44.187194] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3205592 ] 00:05:07.399 [2024-11-20 08:55:44.253988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.399 [2024-11-20 08:55:44.309698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.773 test_start 00:05:08.773 test_end 00:05:08.773 Performance: 449391 events per second 00:05:08.773 00:05:08.773 real 0m1.200s 00:05:08.773 user 0m1.133s 00:05:08.773 sys 0m0.063s 00:05:08.773 08:55:45 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:08.773 08:55:45 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:08.773 ************************************ 00:05:08.773 END TEST event_reactor_perf 00:05:08.773 ************************************ 00:05:08.773 08:55:45 event -- event/event.sh@49 -- # uname -s 00:05:08.773 08:55:45 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:08.773 08:55:45 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:08.773 08:55:45 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:08.773 08:55:45 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:08.773 08:55:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.773 ************************************ 00:05:08.773 START TEST event_scheduler 00:05:08.773 ************************************ 00:05:08.773 08:55:45 event.event_scheduler -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:08.773 * Looking for test storage... 00:05:08.773 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:08.773 08:55:45 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:08.773 08:55:45 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:05:08.773 08:55:45 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:08.773 08:55:45 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:08.773 08:55:45 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.773 08:55:45 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.773 08:55:45 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.773 08:55:45 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.773 08:55:45 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.773 08:55:45 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.773 08:55:45 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.773 08:55:45 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.773 08:55:45 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.773 08:55:45 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.773 08:55:45 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.773 08:55:45 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:08.773 08:55:45 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:08.773 08:55:45 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.773 08:55:45 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.773 08:55:45 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:08.773 08:55:45 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:08.773 08:55:45 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.773 08:55:45 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:08.773 08:55:45 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.773 08:55:45 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:08.773 08:55:45 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:08.773 08:55:45 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.773 08:55:45 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:08.773 08:55:45 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.773 08:55:45 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.773 08:55:45 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.773 08:55:45 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:08.773 08:55:45 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.773 08:55:45 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:08.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.773 --rc genhtml_branch_coverage=1 00:05:08.773 --rc genhtml_function_coverage=1 00:05:08.773 --rc genhtml_legend=1 00:05:08.773 --rc geninfo_all_blocks=1 00:05:08.773 --rc geninfo_unexecuted_blocks=1 00:05:08.773 00:05:08.773 ' 00:05:08.773 08:55:45 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:08.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.773 --rc genhtml_branch_coverage=1 00:05:08.773 --rc genhtml_function_coverage=1 00:05:08.773 --rc 
genhtml_legend=1 00:05:08.773 --rc geninfo_all_blocks=1 00:05:08.773 --rc geninfo_unexecuted_blocks=1 00:05:08.773 00:05:08.773 ' 00:05:08.773 08:55:45 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:08.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.773 --rc genhtml_branch_coverage=1 00:05:08.773 --rc genhtml_function_coverage=1 00:05:08.773 --rc genhtml_legend=1 00:05:08.773 --rc geninfo_all_blocks=1 00:05:08.773 --rc geninfo_unexecuted_blocks=1 00:05:08.773 00:05:08.773 ' 00:05:08.773 08:55:45 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:08.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.773 --rc genhtml_branch_coverage=1 00:05:08.773 --rc genhtml_function_coverage=1 00:05:08.773 --rc genhtml_legend=1 00:05:08.773 --rc geninfo_all_blocks=1 00:05:08.773 --rc geninfo_unexecuted_blocks=1 00:05:08.773 00:05:08.773 ' 00:05:08.773 08:55:45 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:08.773 08:55:45 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3205783 00:05:08.773 08:55:45 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:08.773 08:55:45 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.773 08:55:45 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3205783 00:05:08.773 08:55:45 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 3205783 ']' 00:05:08.773 08:55:45 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.773 08:55:45 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:08.773 08:55:45 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:08.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:08.773 08:55:45 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:08.773 08:55:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:08.773 [2024-11-20 08:55:45.610258] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization...
00:05:08.773 [2024-11-20 08:55:45.610371] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3205783 ]
00:05:08.773 [2024-11-20 08:55:45.674290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:08.773 [2024-11-20 08:55:45.736318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:08.773 [2024-11-20 08:55:45.736375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:08.773 [2024-11-20 08:55:45.736440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:08.773 [2024-11-20 08:55:45.736444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:09.032 08:55:45 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:09.032 08:55:45 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0
00:05:09.032 08:55:45 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:05:09.032 08:55:45 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:09.032 08:55:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:09.032 [2024-11-20 08:55:45.837310] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:05:09.032 [2024-11-20 08:55:45.837338] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:05:09.032 [2024-11-20 08:55:45.837355] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:05:09.032 [2024-11-20 08:55:45.837367] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:05:09.032 [2024-11-20 08:55:45.837378] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:05:09.032 08:55:45 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:09.032 08:55:45 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:05:09.032 08:55:45 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:09.032 08:55:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:09.032 [2024-11-20 08:55:45.939133] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:05:09.032 08:55:45 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:09.032 08:55:45 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:05:09.032 08:55:45 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:09.032 08:55:45 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:09.032 08:55:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:09.032 ************************************
00:05:09.032 START TEST scheduler_create_thread
00:05:09.032 ************************************
00:05:09.032 08:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread
00:05:09.032 08:55:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:05:09.032 08:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:09.032 08:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.032 2
00:05:09.032 08:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:09.032 08:55:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:05:09.032 08:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:09.032 08:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.032 3
00:05:09.032 08:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:09.032 08:55:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:05:09.032 08:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:09.032 08:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.032 4
00:05:09.032 08:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:09.032 08:55:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:05:09.032 08:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:09.032 08:55:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.032 5
00:05:09.032 08:55:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:09.032 08:55:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:05:09.032 08:55:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:09.032 08:55:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.032 6
00:05:09.032 08:55:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:09.032 08:55:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:05:09.032 08:55:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:09.032 08:55:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.032 7
00:05:09.032 08:55:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:09.032 08:55:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:05:09.032 08:55:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:09.032 08:55:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.032 8
00:05:09.032 08:55:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:09.032 08:55:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:05:09.032 08:55:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:09.032 08:55:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.033 9
00:05:09.033 08:55:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:09.033 08:55:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:05:09.033 08:55:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:09.033 08:55:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.033 10
00:05:09.033 08:55:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:09.033 08:55:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:05:09.033 08:55:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:09.033 08:55:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.033 08:55:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:09.033 08:55:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:05:09.033 08:55:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:05:09.033 08:55:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:09.033 08:55:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.290 08:55:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:09.290 08:55:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:05:09.290 08:55:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:09.290 08:55:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.290 08:55:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:09.290 08:55:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:05:09.290 08:55:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:05:09.290 08:55:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:09.290 08:55:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.548 08:55:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:09.548
00:05:09.548 real 0m0.591s
00:05:09.548 user 0m0.013s
00:05:09.548 sys 0m0.001s
00:05:09.548 08:55:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:09.548 08:55:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:09.548 ************************************
00:05:09.548 END TEST scheduler_create_thread
00:05:09.548 ************************************
00:05:09.806 08:55:46 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:05:09.806 08:55:46 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3205783
00:05:09.806 08:55:46 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 3205783 ']'
00:05:09.806 08:55:46 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 3205783
00:05:09.806 08:55:46 event.event_scheduler -- common/autotest_common.sh@957 -- # uname
00:05:09.806 08:55:46 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:09.806 08:55:46 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3205783
00:05:09.806 08:55:46 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2
00:05:09.806 08:55:46 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
00:05:09.806 08:55:46 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3205783'
00:05:09.806 killing process with pid 3205783
00:05:09.806 08:55:46 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 3205783
00:05:09.806 08:55:46 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 3205783
00:05:10.064 [2024-11-20 08:55:47.039300] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:05:10.322
00:05:10.322 real 0m1.831s
00:05:10.322 user 0m2.491s
00:05:10.322 sys 0m0.331s
00:05:10.322 08:55:47 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:10.322 08:55:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:10.322 ************************************
00:05:10.322 END TEST event_scheduler
00:05:10.322 ************************************
00:05:10.322 08:55:47 event -- event/event.sh@51 -- # modprobe -n nbd
00:05:10.322 08:55:47 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:05:10.322 08:55:47 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:10.322 08:55:47 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:10.322 08:55:47 event -- common/autotest_common.sh@10 -- # set +x
00:05:10.322 ************************************
00:05:10.322 START TEST app_repeat
00:05:10.322 ************************************
00:05:10.322 08:55:47 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test
00:05:10.322 08:55:47 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:10.322 08:55:47 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:10.322 08:55:47 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:05:10.322 08:55:47 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:10.322 08:55:47 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:05:10.322 08:55:47 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:05:10.322 08:55:47 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:05:10.322 08:55:47 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3206094
00:05:10.322 08:55:47 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:05:10.322 08:55:47 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:05:10.322 08:55:47 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3206094'
00:05:10.322 Process app_repeat pid: 3206094
00:05:10.322 08:55:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:10.322 08:55:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:05:10.322 spdk_app_start Round 0
00:05:10.322 08:55:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3206094 /var/tmp/spdk-nbd.sock
00:05:10.322 08:55:47 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3206094 ']'
00:05:10.322 08:55:47 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:10.322 08:55:47 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:10.322 08:55:47 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:10.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:10.322 08:55:47 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:10.322 08:55:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:10.323 [2024-11-20 08:55:47.329926] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization...
00:05:10.323 [2024-11-20 08:55:47.329995] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3206094 ]
00:05:10.581 [2024-11-20 08:55:47.396569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:10.581 [2024-11-20 08:55:47.461328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:10.581 [2024-11-20 08:55:47.461338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:10.581 08:55:47 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:10.581 08:55:47 event.app_repeat -- common/autotest_common.sh@866 -- # return 0
00:05:10.581 08:55:47 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:11.146 Malloc0
00:05:11.146 08:55:47 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:11.146 Malloc1
00:05:11.404 08:55:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:11.404 08:55:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:11.404 08:55:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:11.404 08:55:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:11.404 08:55:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:11.404 08:55:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:11.404 08:55:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
08:55:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:11.404 08:55:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:11.404 08:55:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:11.404 08:55:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:11.404 08:55:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:11.404 08:55:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:11.404 08:55:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:11.404 08:55:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:11.404 08:55:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:11.661 /dev/nbd0
00:05:11.661 08:55:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:11.661 08:55:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:11.661 08:55:48 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0
00:05:11.661 08:55:48 event.app_repeat -- common/autotest_common.sh@871 -- # local i
00:05:11.661 08:55:48 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:05:11.661 08:55:48 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:05:11.661 08:55:48 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions
00:05:11.661 08:55:48 event.app_repeat -- common/autotest_common.sh@875 -- # break
00:05:11.661 08:55:48 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:05:11.661 08:55:48 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:05:11.661 08:55:48 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:11.661 1+0 records in
00:05:11.661 1+0 records out
00:05:11.661 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213484 s, 19.2 MB/s
00:05:11.661 08:55:48 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:11.661 08:55:48 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096
00:05:11.661 08:55:48 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:11.661 08:55:48 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:05:11.661 08:55:48 event.app_repeat -- common/autotest_common.sh@891 -- # return 0
00:05:11.661 08:55:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:11.661 08:55:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:11.661 08:55:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:11.918 /dev/nbd1
00:05:11.918 08:55:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:11.918 08:55:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:11.918 08:55:48 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1
00:05:11.918 08:55:48 event.app_repeat -- common/autotest_common.sh@871 -- # local i
00:05:11.918 08:55:48 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:05:11.918 08:55:48 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:05:11.918 08:55:48 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions
00:05:11.918 08:55:48 event.app_repeat -- common/autotest_common.sh@875 -- # break
00:05:11.918 08:55:48 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:05:11.918 08:55:48 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:05:11.918 08:55:48 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:11.918 1+0 records in
00:05:11.918 1+0 records out
00:05:11.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189513 s, 21.6 MB/s
00:05:11.919 08:55:48 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:11.919 08:55:48 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096
00:05:11.919 08:55:48 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:11.919 08:55:48 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:05:11.919 08:55:48 event.app_repeat -- common/autotest_common.sh@891 -- # return 0
00:05:11.919 08:55:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:11.919 08:55:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:11.919 08:55:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:11.919 08:55:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:11.919 08:55:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:12.176 08:55:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:12.176 {
00:05:12.176 "nbd_device": "/dev/nbd0",
00:05:12.176 "bdev_name": "Malloc0"
00:05:12.176 },
00:05:12.176 {
00:05:12.176 "nbd_device": "/dev/nbd1",
00:05:12.176 "bdev_name": "Malloc1"
00:05:12.176 }
00:05:12.176 ]'
00:05:12.176 08:55:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:12.176 {
00:05:12.176 "nbd_device": "/dev/nbd0",
00:05:12.176 "bdev_name": "Malloc0"
00:05:12.176 },
00:05:12.176 {
00:05:12.176 "nbd_device": "/dev/nbd1",
00:05:12.176 "bdev_name": "Malloc1"
00:05:12.176 }
00:05:12.176 ]'
00:05:12.176 08:55:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:12.176 08:55:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:12.176 /dev/nbd1'
00:05:12.176 08:55:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:12.176 /dev/nbd1'
00:05:12.176 08:55:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:12.176 08:55:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:12.176 08:55:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:12.176 08:55:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:12.176 08:55:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:12.176 08:55:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:12.176 08:55:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:12.176 08:55:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:12.176 08:55:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:12.176 08:55:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:12.176 08:55:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:12.176 08:55:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:12.177 256+0 records in
00:05:12.177 256+0 records out
00:05:12.177 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501682 s, 209 MB/s
00:05:12.177 08:55:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:12.177 08:55:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:12.177 256+0 records in
00:05:12.177 256+0 records out
00:05:12.177 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202398 s, 51.8 MB/s
00:05:12.177 08:55:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:12.177 08:55:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:12.434 256+0 records in
00:05:12.434 256+0 records out
00:05:12.434 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219148 s, 47.8 MB/s
00:05:12.434 08:55:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:12.434 08:55:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:12.434 08:55:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:12.434 08:55:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:12.434 08:55:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:12.434 08:55:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:12.434 08:55:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:12.434 08:55:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:12.434 08:55:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:12.434 08:55:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:12.434 08:55:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:12.434 08:55:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:12.434 08:55:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:12.434 08:55:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:12.434 08:55:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:12.434 08:55:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:12.434 08:55:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:12.434 08:55:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:12.434 08:55:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:12.691 08:55:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:12.691 08:55:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:12.691 08:55:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:12.691 08:55:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:12.691 08:55:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:12.691 08:55:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:12.691 08:55:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:12.691 08:55:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:12.691 08:55:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:12.691 08:55:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:12.948 08:55:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:12.948 08:55:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:12.948 08:55:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:12.948 08:55:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:12.948 08:55:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:12.948 08:55:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:12.948 08:55:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:12.948 08:55:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:12.948 08:55:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:12.948 08:55:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:12.948 08:55:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:13.206 08:55:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:13.206 08:55:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:13.206 08:55:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:13.206 08:55:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:13.206 08:55:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:13.206 08:55:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:13.206 08:55:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:13.206 08:55:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:13.206 08:55:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:13.206 08:55:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:13.206 08:55:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:13.206 08:55:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:13.206 08:55:50 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:13.463 08:55:50 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:13.720 [2024-11-20 08:55:50.693518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:13.978 [2024-11-20 08:55:50.749371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:13.978 [2024-11-20 08:55:50.749371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:13.978 [2024-11-20 08:55:50.809100] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:13.978 [2024-11-20 08:55:50.809163] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:16.505 08:55:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:16.505 08:55:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:05:16.505 spdk_app_start Round 1
00:05:16.505 08:55:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3206094 /var/tmp/spdk-nbd.sock
00:05:16.505 08:55:53 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3206094 ']'
00:05:16.505 08:55:53 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:16.505 08:55:53 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:16.505 08:55:53 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:16.505 08:55:53 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:16.505 08:55:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:16.762 08:55:53 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:16.762 08:55:53 event.app_repeat -- common/autotest_common.sh@866 -- # return 0
00:05:16.762 08:55:53 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:17.020 Malloc0
00:05:17.020 08:55:54 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:17.277 Malloc1
00:05:17.557 08:55:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:17.557 08:55:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:17.557 08:55:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:17.557 08:55:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:17.557 08:55:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:17.557 08:55:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:17.557 08:55:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:17.557 08:55:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:17.557 08:55:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:17.557 08:55:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:17.557 08:55:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:17.557 08:55:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:17.557 08:55:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:17.557 08:55:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:17.557 08:55:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:17.557 08:55:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:17.814 /dev/nbd0
00:05:17.814 08:55:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:17.814 08:55:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:17.814 08:55:54 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0
00:05:17.814 08:55:54 event.app_repeat -- common/autotest_common.sh@871 -- # local i
00:05:17.814 08:55:54 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:05:17.814 08:55:54 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:05:17.814 08:55:54 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions
00:05:17.814 08:55:54 event.app_repeat -- common/autotest_common.sh@875 -- # break
00:05:17.814 08:55:54 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:05:17.814 08:55:54 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:05:17.814 08:55:54 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:17.814 1+0 records in
00:05:17.814 1+0 records out
00:05:17.814 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284576 s, 14.4 MB/s
00:05:17.814 08:55:54 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:17.814 08:55:54 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096
00:05:17.814 08:55:54 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:17.814 08:55:54 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:05:17.814 08:55:54 event.app_repeat -- common/autotest_common.sh@891 -- # return 0
00:05:17.814 08:55:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:17.814 08:55:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:17.814 08:55:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:18.072 /dev/nbd1
00:05:18.072 08:55:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:18.072 08:55:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:18.072 08:55:54 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1
00:05:18.072 08:55:54 event.app_repeat -- common/autotest_common.sh@871 -- # local i
00:05:18.072 08:55:54 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:05:18.072 08:55:54 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:05:18.072 08:55:54 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions
00:05:18.072 08:55:54 event.app_repeat -- common/autotest_common.sh@875 -- # break
00:05:18.072 08:55:54 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:05:18.072 08:55:54 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:05:18.072 08:55:54 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:18.072 1+0 records in
00:05:18.072 1+0 records out
00:05:18.072 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000154042 s, 26.6 MB/s
00:05:18.072 08:55:54 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:18.072 08:55:54 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096
00:05:18.072 08:55:54 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:18.072 08:55:54 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:05:18.072 08:55:54 event.app_repeat -- common/autotest_common.sh@891 -- # return 0
00:05:18.072 08:55:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:18.072 08:55:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:18.072 08:55:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:18.072 08:55:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:18.072 08:55:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:18.328 08:55:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:18.328 {
00:05:18.328 "nbd_device": "/dev/nbd0",
00:05:18.328 "bdev_name": "Malloc0"
00:05:18.328 },
00:05:18.328 {
00:05:18.328 "nbd_device": "/dev/nbd1",
00:05:18.328 "bdev_name": "Malloc1"
00:05:18.328 }
00:05:18.328 ]'
00:05:18.328 08:55:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:18.328 {
00:05:18.328 "nbd_device": "/dev/nbd0",
00:05:18.328 "bdev_name": "Malloc0"
00:05:18.328 },
00:05:18.328 {
00:05:18.328 "nbd_device": "/dev/nbd1",
00:05:18.328 "bdev_name": "Malloc1"
00:05:18.328 }
00:05:18.328 ]'
00:05:18.328 08:55:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:18.328 08:55:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:18.328 /dev/nbd1'
00:05:18.328 08:55:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:18.328 /dev/nbd1'
08:55:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.328 08:55:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:18.328 08:55:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:18.328 08:55:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:18.328 08:55:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:18.328 08:55:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:18.328 08:55:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.328 08:55:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.328 08:55:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:18.328 08:55:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:18.328 08:55:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:18.328 08:55:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:18.328 256+0 records in 00:05:18.328 256+0 records out 00:05:18.328 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00388178 s, 270 MB/s 00:05:18.328 08:55:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.328 08:55:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:18.328 256+0 records in 00:05:18.328 256+0 records out 00:05:18.328 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0191474 s, 54.8 MB/s 00:05:18.328 08:55:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.328 08:55:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:18.328 256+0 records in 00:05:18.328 256+0 records out 00:05:18.328 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219596 s, 47.8 MB/s 00:05:18.328 08:55:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:18.328 08:55:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.328 08:55:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.328 08:55:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:18.328 08:55:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:18.328 08:55:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:18.328 08:55:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:18.328 08:55:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.328 08:55:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:18.585 08:55:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.585 08:55:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:18.585 08:55:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:18.585 08:55:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:18.585 08:55:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.585 08:55:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:18.585 08:55:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:18.585 08:55:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:18.585 08:55:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.585 08:55:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:18.842 08:55:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:18.842 08:55:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:18.843 08:55:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:18.843 08:55:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.843 08:55:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.843 08:55:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:18.843 08:55:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.843 08:55:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.843 08:55:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.843 08:55:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:19.100 08:55:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:19.100 08:55:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:19.100 08:55:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:19.100 08:55:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:19.100 08:55:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:19.100 08:55:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:19.100 08:55:55 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:19.100 08:55:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:19.100 08:55:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:19.100 08:55:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.100 08:55:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.357 08:55:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:19.357 08:55:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:19.357 08:55:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:19.357 08:55:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:19.357 08:55:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:19.357 08:55:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.357 08:55:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:19.357 08:55:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:19.357 08:55:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:19.357 08:55:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:19.357 08:55:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:19.357 08:55:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:19.357 08:55:56 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:19.614 08:55:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:19.871 [2024-11-20 08:55:56.778646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:19.871 [2024-11-20 08:55:56.833209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.871 [2024-11-20 08:55:56.833209] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.871 [2024-11-20 08:55:56.890934] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:19.871 [2024-11-20 08:55:56.890996] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:23.147 08:55:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:23.147 08:55:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:23.147 spdk_app_start Round 2 00:05:23.147 08:55:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3206094 /var/tmp/spdk-nbd.sock 00:05:23.147 08:55:59 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3206094 ']' 00:05:23.147 08:55:59 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:23.147 08:55:59 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:23.147 08:55:59 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:23.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:23.147 08:55:59 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:23.147 08:55:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:23.147 08:55:59 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:23.147 08:55:59 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:23.147 08:55:59 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.147 Malloc0 00:05:23.147 08:56:00 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.405 Malloc1 00:05:23.406 08:56:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.406 08:56:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.406 08:56:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.406 08:56:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:23.406 08:56:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.406 08:56:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:23.406 08:56:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.406 08:56:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.406 08:56:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.406 08:56:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:23.406 08:56:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.406 08:56:00 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:23.406 08:56:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:23.406 08:56:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:23.406 08:56:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.406 08:56:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:23.971 /dev/nbd0 00:05:23.971 08:56:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:23.971 08:56:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:23.971 08:56:00 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:23.971 08:56:00 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:23.971 08:56:00 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:23.971 08:56:00 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:23.971 08:56:00 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:23.971 08:56:00 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:23.971 08:56:00 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:23.971 08:56:00 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:23.971 08:56:00 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.971 1+0 records in 00:05:23.971 1+0 records out 00:05:23.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206256 s, 19.9 MB/s 00:05:23.971 08:56:00 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:23.971 08:56:00 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:23.971 08:56:00 event.app_repeat -- 
common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:23.971 08:56:00 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:23.971 08:56:00 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:23.971 08:56:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.971 08:56:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.971 08:56:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:24.228 /dev/nbd1 00:05:24.228 08:56:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:24.228 08:56:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:24.228 08:56:01 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:24.228 08:56:01 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:24.228 08:56:01 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:24.228 08:56:01 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:24.228 08:56:01 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:24.228 08:56:01 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:24.228 08:56:01 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:24.229 08:56:01 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:24.229 08:56:01 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:24.229 1+0 records in 00:05:24.229 1+0 records out 00:05:24.229 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000145296 s, 28.2 MB/s 00:05:24.229 08:56:01 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:24.229 08:56:01 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:24.229 08:56:01 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:24.229 08:56:01 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:24.229 08:56:01 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:24.229 08:56:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:24.229 08:56:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.229 08:56:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.229 08:56:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.229 08:56:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.486 08:56:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:24.486 { 00:05:24.486 "nbd_device": "/dev/nbd0", 00:05:24.486 "bdev_name": "Malloc0" 00:05:24.486 }, 00:05:24.486 { 00:05:24.486 "nbd_device": "/dev/nbd1", 00:05:24.486 "bdev_name": "Malloc1" 00:05:24.486 } 00:05:24.486 ]' 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:24.487 { 00:05:24.487 "nbd_device": "/dev/nbd0", 00:05:24.487 "bdev_name": "Malloc0" 00:05:24.487 }, 00:05:24.487 { 00:05:24.487 "nbd_device": "/dev/nbd1", 00:05:24.487 "bdev_name": "Malloc1" 00:05:24.487 } 00:05:24.487 ]' 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:24.487 /dev/nbd1' 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:24.487 /dev/nbd1' 00:05:24.487 
08:56:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:24.487 256+0 records in 00:05:24.487 256+0 records out 00:05:24.487 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00509271 s, 206 MB/s 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:24.487 256+0 records in 00:05:24.487 256+0 records out 00:05:24.487 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198306 s, 52.9 MB/s 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:24.487 256+0 records in 00:05:24.487 256+0 records out 00:05:24.487 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215438 s, 48.7 MB/s 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.487 08:56:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:24.745 08:56:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:24.745 08:56:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:24.745 08:56:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:24.745 08:56:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.745 08:56:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.745 08:56:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:24.745 08:56:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.745 08:56:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.745 08:56:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.745 08:56:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:25.314 08:56:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:25.314 08:56:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:25.314 08:56:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:25.314 08:56:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.314 08:56:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.314 08:56:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:25.314 08:56:02 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:25.314 08:56:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.314 08:56:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.314 08:56:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.314 08:56:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.314 08:56:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:25.314 08:56:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:25.314 08:56:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:25.573 08:56:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:25.573 08:56:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:25.573 08:56:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.573 08:56:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:25.573 08:56:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:25.573 08:56:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:25.573 08:56:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:25.573 08:56:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:25.573 08:56:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:25.573 08:56:02 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:25.832 08:56:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:25.832 [2024-11-20 08:56:02.856990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.091 [2024-11-20 08:56:02.913153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.091 [2024-11-20 08:56:02.913158] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.091 [2024-11-20 08:56:02.973041] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:26.091 [2024-11-20 08:56:02.973103] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:29.372 08:56:05 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3206094 /var/tmp/spdk-nbd.sock 00:05:29.372 08:56:05 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3206094 ']' 00:05:29.372 08:56:05 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:29.372 08:56:05 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:29.373 08:56:05 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:29.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:29.373 08:56:05 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:29.373 08:56:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:29.373 08:56:05 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:29.373 08:56:05 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:29.373 08:56:05 event.app_repeat -- event/event.sh@39 -- # killprocess 3206094 00:05:29.373 08:56:05 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 3206094 ']' 00:05:29.373 08:56:05 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 3206094 00:05:29.373 08:56:05 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:05:29.373 08:56:05 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:29.373 08:56:05 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3206094 00:05:29.373 08:56:05 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:29.373 08:56:05 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:29.373 08:56:05 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3206094' 00:05:29.373 killing process with pid 3206094 00:05:29.373 08:56:05 event.app_repeat -- common/autotest_common.sh@971 -- # kill 3206094 00:05:29.373 08:56:05 event.app_repeat -- common/autotest_common.sh@976 -- # wait 3206094 00:05:29.373 spdk_app_start is called in Round 0. 00:05:29.373 Shutdown signal received, stop current app iteration 00:05:29.373 Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 reinitialization... 00:05:29.373 spdk_app_start is called in Round 1. 00:05:29.373 Shutdown signal received, stop current app iteration 00:05:29.373 Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 reinitialization... 00:05:29.373 spdk_app_start is called in Round 2. 
00:05:29.373 Shutdown signal received, stop current app iteration 00:05:29.373 Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 reinitialization... 00:05:29.373 spdk_app_start is called in Round 3. 00:05:29.373 Shutdown signal received, stop current app iteration 00:05:29.373 08:56:06 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:29.373 08:56:06 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:29.373 00:05:29.373 real 0m18.841s 00:05:29.373 user 0m41.727s 00:05:29.373 sys 0m3.221s 00:05:29.373 08:56:06 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:29.373 08:56:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:29.373 ************************************ 00:05:29.373 END TEST app_repeat 00:05:29.373 ************************************ 00:05:29.373 08:56:06 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:29.373 08:56:06 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:29.373 08:56:06 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:29.373 08:56:06 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:29.373 08:56:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.373 ************************************ 00:05:29.373 START TEST cpu_locks 00:05:29.373 ************************************ 00:05:29.373 08:56:06 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:29.373 * Looking for test storage... 
00:05:29.373 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:29.373 08:56:06 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:29.373 08:56:06 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:05:29.373 08:56:06 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:29.373 08:56:06 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:29.373 08:56:06 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.373 08:56:06 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.373 08:56:06 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.373 08:56:06 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.373 08:56:06 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.373 08:56:06 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.373 08:56:06 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.373 08:56:06 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.373 08:56:06 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.373 08:56:06 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.373 08:56:06 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.373 08:56:06 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:29.373 08:56:06 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:29.373 08:56:06 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.373 08:56:06 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.373 08:56:06 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:29.373 08:56:06 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:29.373 08:56:06 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.373 08:56:06 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:29.373 08:56:06 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.373 08:56:06 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:29.373 08:56:06 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:29.373 08:56:06 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.373 08:56:06 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:29.373 08:56:06 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.373 08:56:06 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.373 08:56:06 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.373 08:56:06 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:29.373 08:56:06 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.373 08:56:06 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:29.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.373 --rc genhtml_branch_coverage=1 00:05:29.373 --rc genhtml_function_coverage=1 00:05:29.373 --rc genhtml_legend=1 00:05:29.373 --rc geninfo_all_blocks=1 00:05:29.373 --rc geninfo_unexecuted_blocks=1 00:05:29.373 00:05:29.373 ' 00:05:29.373 08:56:06 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:29.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.373 --rc genhtml_branch_coverage=1 00:05:29.373 --rc genhtml_function_coverage=1 00:05:29.373 --rc genhtml_legend=1 00:05:29.373 --rc geninfo_all_blocks=1 00:05:29.373 --rc geninfo_unexecuted_blocks=1 
00:05:29.373 00:05:29.373 ' 00:05:29.373 08:56:06 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:29.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.373 --rc genhtml_branch_coverage=1 00:05:29.373 --rc genhtml_function_coverage=1 00:05:29.373 --rc genhtml_legend=1 00:05:29.373 --rc geninfo_all_blocks=1 00:05:29.373 --rc geninfo_unexecuted_blocks=1 00:05:29.373 00:05:29.373 ' 00:05:29.373 08:56:06 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:29.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.373 --rc genhtml_branch_coverage=1 00:05:29.373 --rc genhtml_function_coverage=1 00:05:29.373 --rc genhtml_legend=1 00:05:29.373 --rc geninfo_all_blocks=1 00:05:29.373 --rc geninfo_unexecuted_blocks=1 00:05:29.373 00:05:29.373 ' 00:05:29.373 08:56:06 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:29.373 08:56:06 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:29.373 08:56:06 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:29.373 08:56:06 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:29.373 08:56:06 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:29.373 08:56:06 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:29.373 08:56:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.373 ************************************ 00:05:29.373 START TEST default_locks 00:05:29.373 ************************************ 00:05:29.373 08:56:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:05:29.373 08:56:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3208584 00:05:29.373 08:56:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:05:29.373 08:56:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3208584 00:05:29.373 08:56:06 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 3208584 ']' 00:05:29.373 08:56:06 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.373 08:56:06 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:29.373 08:56:06 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.373 08:56:06 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:29.373 08:56:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.632 [2024-11-20 08:56:06.428141] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:05:29.632 [2024-11-20 08:56:06.428228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3208584 ] 00:05:29.632 [2024-11-20 08:56:06.492738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.632 [2024-11-20 08:56:06.548663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.890 08:56:06 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:29.890 08:56:06 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:05:29.890 08:56:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3208584 00:05:29.890 08:56:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3208584 00:05:29.890 08:56:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:30.456 lslocks: write error 00:05:30.456 08:56:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3208584 00:05:30.456 08:56:07 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 3208584 ']' 00:05:30.456 08:56:07 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 3208584 00:05:30.456 08:56:07 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:05:30.456 08:56:07 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:30.456 08:56:07 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3208584 00:05:30.456 08:56:07 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:30.456 08:56:07 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:30.456 08:56:07 event.cpu_locks.default_locks -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 3208584' 00:05:30.456 killing process with pid 3208584 00:05:30.456 08:56:07 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 3208584 00:05:30.456 08:56:07 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 3208584 00:05:30.714 08:56:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3208584 00:05:30.714 08:56:07 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:30.714 08:56:07 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3208584 00:05:30.714 08:56:07 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:30.715 08:56:07 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.715 08:56:07 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:30.715 08:56:07 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.715 08:56:07 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 3208584 00:05:30.715 08:56:07 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 3208584 ']' 00:05:30.715 08:56:07 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.715 08:56:07 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:30.715 08:56:07 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:30.715 08:56:07 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:30.715 08:56:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3208584) - No such process 00:05:30.715 ERROR: process (pid: 3208584) is no longer running 00:05:30.715 08:56:07 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:30.715 08:56:07 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:05:30.715 08:56:07 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:30.715 08:56:07 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:30.715 08:56:07 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:30.715 08:56:07 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:30.715 08:56:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:30.715 08:56:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:30.715 08:56:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:30.715 08:56:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:30.715 00:05:30.715 real 0m1.276s 00:05:30.715 user 0m1.257s 00:05:30.715 sys 0m0.523s 00:05:30.715 08:56:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:30.715 08:56:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.715 ************************************ 00:05:30.715 END TEST default_locks 00:05:30.715 ************************************ 00:05:30.715 08:56:07 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:30.715 08:56:07 event.cpu_locks -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:30.715 08:56:07 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:30.715 08:56:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.715 ************************************ 00:05:30.715 START TEST default_locks_via_rpc 00:05:30.715 ************************************ 00:05:30.715 08:56:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:05:30.715 08:56:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3208748 00:05:30.715 08:56:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:30.715 08:56:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3208748 00:05:30.715 08:56:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3208748 ']' 00:05:30.715 08:56:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.715 08:56:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:30.715 08:56:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.715 08:56:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:30.715 08:56:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.973 [2024-11-20 08:56:07.763180] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:05:30.973 [2024-11-20 08:56:07.763271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3208748 ] 00:05:30.973 [2024-11-20 08:56:07.829756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.973 [2024-11-20 08:56:07.889850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.231 08:56:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:31.231 08:56:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:31.231 08:56:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:31.231 08:56:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.231 08:56:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.231 08:56:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.231 08:56:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:31.231 08:56:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:31.231 08:56:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:31.231 08:56:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:31.231 08:56:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:31.231 08:56:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.231 08:56:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.231 08:56:08 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.231 08:56:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3208748 00:05:31.231 08:56:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3208748 00:05:31.231 08:56:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:31.489 08:56:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3208748 00:05:31.489 08:56:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 3208748 ']' 00:05:31.489 08:56:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 3208748 00:05:31.489 08:56:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:05:31.489 08:56:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:31.489 08:56:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3208748 00:05:31.489 08:56:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:31.489 08:56:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:31.489 08:56:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3208748' 00:05:31.489 killing process with pid 3208748 00:05:31.489 08:56:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 3208748 00:05:31.489 08:56:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 3208748 00:05:32.055 00:05:32.055 real 0m1.102s 00:05:32.055 user 0m1.073s 00:05:32.055 sys 0m0.491s 00:05:32.055 08:56:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:32.055 08:56:08 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.055 ************************************ 00:05:32.055 END TEST default_locks_via_rpc 00:05:32.055 ************************************ 00:05:32.055 08:56:08 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:32.055 08:56:08 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:32.055 08:56:08 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:32.055 08:56:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.055 ************************************ 00:05:32.055 START TEST non_locking_app_on_locked_coremask 00:05:32.055 ************************************ 00:05:32.055 08:56:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:05:32.055 08:56:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3208908 00:05:32.055 08:56:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.055 08:56:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3208908 /var/tmp/spdk.sock 00:05:32.055 08:56:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3208908 ']' 00:05:32.055 08:56:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.055 08:56:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:32.055 08:56:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:32.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.055 08:56:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:32.055 08:56:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.055 [2024-11-20 08:56:08.914533] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:05:32.055 [2024-11-20 08:56:08.914629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3208908 ] 00:05:32.055 [2024-11-20 08:56:08.979659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.055 [2024-11-20 08:56:09.040026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.311 08:56:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:32.311 08:56:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:32.311 08:56:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3208917 00:05:32.311 08:56:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:32.311 08:56:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3208917 /var/tmp/spdk2.sock 00:05:32.311 08:56:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3208917 ']' 00:05:32.312 08:56:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:05:32.312 08:56:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:32.312 08:56:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:32.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:32.312 08:56:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:32.312 08:56:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.569 [2024-11-20 08:56:09.363553] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:05:32.569 [2024-11-20 08:56:09.363647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3208917 ] 00:05:32.569 [2024-11-20 08:56:09.462001] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:32.569 [2024-11-20 08:56:09.462030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.569 [2024-11-20 08:56:09.579223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.502 08:56:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:33.502 08:56:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:33.502 08:56:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3208908 00:05:33.502 08:56:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3208908 00:05:33.502 08:56:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:33.794 lslocks: write error 00:05:33.794 08:56:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3208908 00:05:33.794 08:56:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3208908 ']' 00:05:33.794 08:56:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3208908 00:05:33.794 08:56:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:33.794 08:56:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:33.794 08:56:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3208908 00:05:34.077 08:56:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:34.078 08:56:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:34.078 08:56:10 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 3208908' 00:05:34.078 killing process with pid 3208908 00:05:34.078 08:56:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3208908 00:05:34.078 08:56:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3208908 00:05:34.660 08:56:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3208917 00:05:34.660 08:56:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3208917 ']' 00:05:34.660 08:56:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3208917 00:05:34.660 08:56:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:34.660 08:56:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:34.660 08:56:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3208917 00:05:34.660 08:56:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:34.660 08:56:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:34.660 08:56:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3208917' 00:05:34.660 killing process with pid 3208917 00:05:34.660 08:56:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3208917 00:05:34.660 08:56:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3208917 00:05:35.225 00:05:35.225 real 0m3.230s 00:05:35.225 user 0m3.462s 00:05:35.225 sys 0m1.041s 00:05:35.225 08:56:12 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:35.225 08:56:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.225 ************************************ 00:05:35.225 END TEST non_locking_app_on_locked_coremask 00:05:35.225 ************************************ 00:05:35.225 08:56:12 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:35.225 08:56:12 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:35.225 08:56:12 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:35.225 08:56:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.225 ************************************ 00:05:35.225 START TEST locking_app_on_unlocked_coremask 00:05:35.225 ************************************ 00:05:35.225 08:56:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:05:35.225 08:56:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3209346 00:05:35.225 08:56:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:35.225 08:56:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3209346 /var/tmp/spdk.sock 00:05:35.225 08:56:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3209346 ']' 00:05:35.225 08:56:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.225 08:56:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:35.225 08:56:12 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.225 08:56:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:35.225 08:56:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.225 [2024-11-20 08:56:12.198372] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:05:35.225 [2024-11-20 08:56:12.198471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3209346 ] 00:05:35.493 [2024-11-20 08:56:12.264176] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:35.493 [2024-11-20 08:56:12.264213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.493 [2024-11-20 08:56:12.323913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.752 08:56:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:35.752 08:56:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:35.752 08:56:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3209355 00:05:35.752 08:56:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:35.752 08:56:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3209355 /var/tmp/spdk2.sock 00:05:35.752 08:56:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3209355 ']' 00:05:35.752 08:56:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:35.752 08:56:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:35.752 08:56:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:35.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:35.752 08:56:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:35.752 08:56:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.752 [2024-11-20 08:56:12.651177] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:05:35.752 [2024-11-20 08:56:12.651266] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3209355 ] 00:05:35.752 [2024-11-20 08:56:12.747967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.010 [2024-11-20 08:56:12.865003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.943 08:56:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:36.943 08:56:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:36.943 08:56:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3209355 00:05:36.943 08:56:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3209355 00:05:36.943 08:56:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:37.201 lslocks: write error 00:05:37.201 08:56:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3209346 00:05:37.201 08:56:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3209346 ']' 00:05:37.201 08:56:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 3209346 00:05:37.201 08:56:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:37.201 08:56:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:37.201 08:56:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3209346 00:05:37.201 08:56:14 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:37.201 08:56:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:37.201 08:56:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3209346' 00:05:37.201 killing process with pid 3209346 00:05:37.201 08:56:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 3209346 00:05:37.201 08:56:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 3209346 00:05:38.136 08:56:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3209355 00:05:38.136 08:56:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3209355 ']' 00:05:38.136 08:56:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 3209355 00:05:38.136 08:56:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:38.136 08:56:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:38.136 08:56:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3209355 00:05:38.136 08:56:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:38.136 08:56:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:38.136 08:56:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3209355' 00:05:38.136 killing process with pid 3209355 00:05:38.136 08:56:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 3209355 00:05:38.136 08:56:14 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 3209355 00:05:38.394 00:05:38.394 real 0m3.213s 00:05:38.394 user 0m3.427s 00:05:38.394 sys 0m1.018s 00:05:38.394 08:56:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:38.394 08:56:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.394 ************************************ 00:05:38.394 END TEST locking_app_on_unlocked_coremask 00:05:38.394 ************************************ 00:05:38.394 08:56:15 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:38.394 08:56:15 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:38.394 08:56:15 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:38.394 08:56:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.394 ************************************ 00:05:38.394 START TEST locking_app_on_locked_coremask 00:05:38.394 ************************************ 00:05:38.394 08:56:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:05:38.394 08:56:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3209786 00:05:38.394 08:56:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:38.394 08:56:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3209786 /var/tmp/spdk.sock 00:05:38.394 08:56:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3209786 ']' 00:05:38.394 08:56:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 
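The `killprocess` sequence traced above (`kill -0` liveness probe, `ps --no-headers -o comm=` name check, `kill -9`, then `wait`) can be sketched in isolation; a plain `sleep` stands in for the spdk_tgt reactor process:

```shell
# Sketch of the killprocess helper pattern traced above.
# A background sleep stands in for spdk_tgt (reactor_0).
sleep 300 & pid=$!
kill -0 "$pid"                        # liveness probe, as in kill -0 $pid
name=$(ps --no-headers -o comm= "$pid")
echo "killing process with pid $pid ($name)"
kill -9 "$pid"
wait "$pid" 2>/dev/null || true       # reap; wait reports the SIGKILL status
```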
00:05:38.394 08:56:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:38.394 08:56:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.394 08:56:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:38.394 08:56:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.653 [2024-11-20 08:56:15.464084] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:05:38.653 [2024-11-20 08:56:15.464178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3209786 ] 00:05:38.653 [2024-11-20 08:56:15.530016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.653 [2024-11-20 08:56:15.589946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.911 08:56:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:38.911 08:56:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:38.911 08:56:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3209789 00:05:38.911 08:56:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:38.911 08:56:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3209789 /var/tmp/spdk2.sock 
00:05:38.911 08:56:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:38.911 08:56:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3209789 /var/tmp/spdk2.sock 00:05:38.911 08:56:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:38.911 08:56:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:38.911 08:56:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:38.911 08:56:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:38.911 08:56:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3209789 /var/tmp/spdk2.sock 00:05:38.911 08:56:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3209789 ']' 00:05:38.911 08:56:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:38.911 08:56:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:38.911 08:56:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:38.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:38.912 08:56:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:38.912 08:56:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.912 [2024-11-20 08:56:15.913982] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:05:38.912 [2024-11-20 08:56:15.914069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3209789 ] 00:05:39.169 [2024-11-20 08:56:16.012072] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3209786 has claimed it. 00:05:39.169 [2024-11-20 08:56:16.012139] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:39.735 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3209789) - No such process 00:05:39.735 ERROR: process (pid: 3209789) is no longer running 00:05:39.735 08:56:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:39.735 08:56:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:05:39.735 08:56:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:39.735 08:56:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:39.735 08:56:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:39.735 08:56:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:39.735 08:56:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3209786 00:05:39.735 08:56:16 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3209786 00:05:39.735 08:56:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.992 lslocks: write error 00:05:39.992 08:56:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3209786 00:05:39.992 08:56:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3209786 ']' 00:05:39.992 08:56:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3209786 00:05:39.992 08:56:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:39.992 08:56:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:39.992 08:56:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3209786 00:05:39.992 08:56:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:39.992 08:56:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:39.992 08:56:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3209786' 00:05:39.992 killing process with pid 3209786 00:05:39.992 08:56:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3209786 00:05:39.992 08:56:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3209786 00:05:40.558 00:05:40.558 real 0m2.016s 00:05:40.558 user 0m2.209s 00:05:40.558 sys 0m0.662s 00:05:40.558 08:56:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:40.558 08:56:17 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:05:40.558 ************************************ 00:05:40.558 END TEST locking_app_on_locked_coremask 00:05:40.558 ************************************ 00:05:40.559 08:56:17 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:40.559 08:56:17 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:40.559 08:56:17 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:40.559 08:56:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.559 ************************************ 00:05:40.559 START TEST locking_overlapped_coremask 00:05:40.559 ************************************ 00:05:40.559 08:56:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:05:40.559 08:56:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3210028 00:05:40.559 08:56:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:40.559 08:56:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3210028 /var/tmp/spdk.sock 00:05:40.559 08:56:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 3210028 ']' 00:05:40.559 08:56:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.559 08:56:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:40.559 08:56:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:40.559 08:56:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:40.559 08:56:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.559 [2024-11-20 08:56:17.530068] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:05:40.559 [2024-11-20 08:56:17.530152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3210028 ] 00:05:40.817 [2024-11-20 08:56:17.595467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:40.817 [2024-11-20 08:56:17.652171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.817 [2024-11-20 08:56:17.652236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:40.817 [2024-11-20 08:56:17.652240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.075 08:56:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:41.075 08:56:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:41.075 08:56:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3210091 00:05:41.075 08:56:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3210091 /var/tmp/spdk2.sock 00:05:41.075 08:56:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:41.075 08:56:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3210091 /var/tmp/spdk2.sock 00:05:41.075 08:56:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:41.075 08:56:17 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:41.075 08:56:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:41.075 08:56:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:41.075 08:56:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:41.075 08:56:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3210091 /var/tmp/spdk2.sock 00:05:41.075 08:56:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 3210091 ']' 00:05:41.075 08:56:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.075 08:56:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:41.075 08:56:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.075 08:56:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:41.075 08:56:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.075 [2024-11-20 08:56:17.991233] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:05:41.075 [2024-11-20 08:56:17.991356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3210091 ] 00:05:41.075 [2024-11-20 08:56:18.095265] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3210028 has claimed it. 00:05:41.075 [2024-11-20 08:56:18.095339] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:42.009 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3210091) - No such process 00:05:42.009 ERROR: process (pid: 3210091) is no longer running 00:05:42.009 08:56:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:42.009 08:56:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:05:42.009 08:56:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:42.009 08:56:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:42.009 08:56:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:42.009 08:56:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:42.009 08:56:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:42.009 08:56:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:42.009 08:56:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:42.009 08:56:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:42.009 08:56:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3210028 00:05:42.009 08:56:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 3210028 ']' 00:05:42.009 08:56:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 3210028 00:05:42.009 08:56:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:05:42.009 08:56:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:42.009 08:56:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3210028 00:05:42.009 08:56:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:42.009 08:56:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:42.009 08:56:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3210028' 00:05:42.009 killing process with pid 3210028 00:05:42.009 08:56:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 3210028 00:05:42.009 08:56:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 3210028 00:05:42.268 00:05:42.268 real 0m1.696s 00:05:42.268 user 0m4.729s 00:05:42.268 sys 0m0.462s 00:05:42.268 08:56:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:42.268 08:56:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.268 
************************************ 00:05:42.268 END TEST locking_overlapped_coremask 00:05:42.268 ************************************ 00:05:42.268 08:56:19 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:42.268 08:56:19 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:42.268 08:56:19 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:42.268 08:56:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.268 ************************************ 00:05:42.268 START TEST locking_overlapped_coremask_via_rpc 00:05:42.268 ************************************ 00:05:42.268 08:56:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:05:42.268 08:56:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3210255 00:05:42.268 08:56:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:42.268 08:56:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3210255 /var/tmp/spdk.sock 00:05:42.268 08:56:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3210255 ']' 00:05:42.268 08:56:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.268 08:56:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:42.268 08:56:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
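The `check_remaining_locks` step a few records back compares a glob of the lock files actually present against what brace expansion predicts for the core mask. A sketch of that comparison, using a temp dir instead of /var/tmp:

```shell
# Sketch of check_remaining_locks: glob what exists, compare against the
# set brace expansion predicts for a three-core mask (0x7).
dir=$(mktemp -d)
touch "$dir"/spdk_cpu_lock_{000..002}
locks=("$dir"/spdk_cpu_lock_*)                    # files actually present
locks_expected=("$dir"/spdk_cpu_lock_{000..002})  # files mask 0x7 implies
[[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "locks match"
```

The glob sorts lexically and the brace expansion emits in sequence order, so zero-padded names make the two word lists directly comparable.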
00:05:42.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.268 08:56:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:42.268 08:56:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.268 [2024-11-20 08:56:19.278713] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:05:42.268 [2024-11-20 08:56:19.278815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3210255 ] 00:05:42.526 [2024-11-20 08:56:19.343603] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:42.526 [2024-11-20 08:56:19.343663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:42.526 [2024-11-20 08:56:19.405760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.526 [2024-11-20 08:56:19.408320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.526 [2024-11-20 08:56:19.408332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.784 08:56:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:42.784 08:56:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:42.784 08:56:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3210289 00:05:42.784 08:56:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3210289 /var/tmp/spdk2.sock 00:05:42.784 08:56:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:05:42.784 08:56:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3210289 ']' 00:05:42.784 08:56:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:42.784 08:56:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:42.784 08:56:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:42.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:42.784 08:56:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:42.784 08:56:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.785 [2024-11-20 08:56:19.744806] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:05:42.785 [2024-11-20 08:56:19.744895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3210289 ] 00:05:43.043 [2024-11-20 08:56:19.853837] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:43.043 [2024-11-20 08:56:19.853875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:43.043 [2024-11-20 08:56:19.979672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:43.043 [2024-11-20 08:56:19.979734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:43.043 [2024-11-20 08:56:19.979737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:43.976 08:56:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:43.976 08:56:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:43.976 08:56:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:43.976 08:56:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:43.976 08:56:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.976 08:56:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:43.976 08:56:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:43.976 08:56:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:43.976 08:56:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:43.976 08:56:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:43.976 08:56:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:43.976 08:56:20 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:43.976 08:56:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:43.976 08:56:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:43.976 08:56:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:43.976 08:56:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.976 [2024-11-20 08:56:20.746411] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3210255 has claimed it. 00:05:43.976 request: 00:05:43.976 { 00:05:43.976 "method": "framework_enable_cpumask_locks", 00:05:43.976 "req_id": 1 00:05:43.976 } 00:05:43.976 Got JSON-RPC error response 00:05:43.976 response: 00:05:43.976 { 00:05:43.976 "code": -32603, 00:05:43.976 "message": "Failed to claim CPU core: 2" 00:05:43.976 } 00:05:43.976 08:56:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:43.977 08:56:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:43.977 08:56:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:43.977 08:56:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:43.977 08:56:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:43.977 08:56:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3210255 /var/tmp/spdk.sock 00:05:43.977 08:56:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 
-- # '[' -z 3210255 ']' 00:05:43.977 08:56:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.977 08:56:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:43.977 08:56:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.977 08:56:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:43.977 08:56:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.235 08:56:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:44.235 08:56:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:44.235 08:56:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3210289 /var/tmp/spdk2.sock 00:05:44.235 08:56:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3210289 ']' 00:05:44.235 08:56:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.235 08:56:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:44.235 08:56:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:44.235 08:56:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:44.235 08:56:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.493 08:56:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:44.493 08:56:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:44.493 08:56:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:44.493 08:56:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:44.493 08:56:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:44.493 08:56:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:44.493 00:05:44.493 real 0m2.095s 00:05:44.493 user 0m1.153s 00:05:44.493 sys 0m0.184s 00:05:44.493 08:56:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:44.493 08:56:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.493 ************************************ 00:05:44.493 END TEST locking_overlapped_coremask_via_rpc 00:05:44.493 ************************************ 00:05:44.493 08:56:21 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:44.493 08:56:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3210255 ]] 00:05:44.493 08:56:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 3210255 00:05:44.493 08:56:21 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3210255 ']' 00:05:44.493 08:56:21 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3210255 00:05:44.493 08:56:21 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:44.493 08:56:21 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:44.493 08:56:21 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3210255 00:05:44.493 08:56:21 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:44.493 08:56:21 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:44.493 08:56:21 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3210255' 00:05:44.493 killing process with pid 3210255 00:05:44.493 08:56:21 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 3210255 00:05:44.493 08:56:21 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 3210255 00:05:45.057 08:56:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3210289 ]] 00:05:45.058 08:56:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3210289 00:05:45.058 08:56:21 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3210289 ']' 00:05:45.058 08:56:21 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3210289 00:05:45.058 08:56:21 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:45.058 08:56:21 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:45.058 08:56:21 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3210289 00:05:45.058 08:56:21 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:45.058 08:56:21 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:45.058 08:56:21 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
3210289' 00:05:45.058 killing process with pid 3210289 00:05:45.058 08:56:21 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 3210289 00:05:45.058 08:56:21 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 3210289 00:05:45.317 08:56:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:45.317 08:56:22 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:45.317 08:56:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3210255 ]] 00:05:45.317 08:56:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3210255 00:05:45.317 08:56:22 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3210255 ']' 00:05:45.317 08:56:22 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3210255 00:05:45.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3210255) - No such process 00:05:45.317 08:56:22 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 3210255 is not found' 00:05:45.317 Process with pid 3210255 is not found 00:05:45.317 08:56:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3210289 ]] 00:05:45.317 08:56:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3210289 00:05:45.317 08:56:22 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3210289 ']' 00:05:45.317 08:56:22 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3210289 00:05:45.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3210289) - No such process 00:05:45.317 08:56:22 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 3210289 is not found' 00:05:45.317 Process with pid 3210289 is not found 00:05:45.317 08:56:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:45.317 00:05:45.317 real 0m16.082s 00:05:45.317 user 0m29.181s 00:05:45.317 sys 0m5.361s 00:05:45.317 08:56:22 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:45.317 
08:56:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.317 ************************************ 00:05:45.317 END TEST cpu_locks 00:05:45.317 ************************************ 00:05:45.317 00:05:45.317 real 0m40.817s 00:05:45.317 user 1m20.013s 00:05:45.317 sys 0m9.373s 00:05:45.317 08:56:22 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:45.317 08:56:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.317 ************************************ 00:05:45.317 END TEST event 00:05:45.317 ************************************ 00:05:45.317 08:56:22 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:45.317 08:56:22 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:45.317 08:56:22 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:45.317 08:56:22 -- common/autotest_common.sh@10 -- # set +x 00:05:45.575 ************************************ 00:05:45.575 START TEST thread 00:05:45.575 ************************************ 00:05:45.575 08:56:22 thread -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:45.576 * Looking for test storage... 
00:05:45.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:45.576 08:56:22 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:45.576 08:56:22 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:45.576 08:56:22 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:05:45.576 08:56:22 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:45.576 08:56:22 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.576 08:56:22 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.576 08:56:22 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.576 08:56:22 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.576 08:56:22 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.576 08:56:22 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.576 08:56:22 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.576 08:56:22 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.576 08:56:22 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.576 08:56:22 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.576 08:56:22 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.576 08:56:22 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:45.576 08:56:22 thread -- scripts/common.sh@345 -- # : 1 00:05:45.576 08:56:22 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.576 08:56:22 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:45.576 08:56:22 thread -- scripts/common.sh@365 -- # decimal 1 00:05:45.576 08:56:22 thread -- scripts/common.sh@353 -- # local d=1 00:05:45.576 08:56:22 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.576 08:56:22 thread -- scripts/common.sh@355 -- # echo 1 00:05:45.576 08:56:22 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.576 08:56:22 thread -- scripts/common.sh@366 -- # decimal 2 00:05:45.576 08:56:22 thread -- scripts/common.sh@353 -- # local d=2 00:05:45.576 08:56:22 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.576 08:56:22 thread -- scripts/common.sh@355 -- # echo 2 00:05:45.576 08:56:22 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.576 08:56:22 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.576 08:56:22 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.576 08:56:22 thread -- scripts/common.sh@368 -- # return 0 00:05:45.576 08:56:22 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.576 08:56:22 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:45.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.576 --rc genhtml_branch_coverage=1 00:05:45.576 --rc genhtml_function_coverage=1 00:05:45.576 --rc genhtml_legend=1 00:05:45.576 --rc geninfo_all_blocks=1 00:05:45.576 --rc geninfo_unexecuted_blocks=1 00:05:45.576 00:05:45.576 ' 00:05:45.576 08:56:22 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:45.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.576 --rc genhtml_branch_coverage=1 00:05:45.576 --rc genhtml_function_coverage=1 00:05:45.576 --rc genhtml_legend=1 00:05:45.576 --rc geninfo_all_blocks=1 00:05:45.576 --rc geninfo_unexecuted_blocks=1 00:05:45.576 00:05:45.576 ' 00:05:45.576 08:56:22 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:45.576 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.576 --rc genhtml_branch_coverage=1 00:05:45.576 --rc genhtml_function_coverage=1 00:05:45.576 --rc genhtml_legend=1 00:05:45.576 --rc geninfo_all_blocks=1 00:05:45.576 --rc geninfo_unexecuted_blocks=1 00:05:45.576 00:05:45.576 ' 00:05:45.576 08:56:22 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:45.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.576 --rc genhtml_branch_coverage=1 00:05:45.576 --rc genhtml_function_coverage=1 00:05:45.576 --rc genhtml_legend=1 00:05:45.576 --rc geninfo_all_blocks=1 00:05:45.576 --rc geninfo_unexecuted_blocks=1 00:05:45.576 00:05:45.576 ' 00:05:45.576 08:56:22 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:45.576 08:56:22 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:45.576 08:56:22 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:45.576 08:56:22 thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.576 ************************************ 00:05:45.576 START TEST thread_poller_perf 00:05:45.576 ************************************ 00:05:45.576 08:56:22 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:45.576 [2024-11-20 08:56:22.548824] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:05:45.576 [2024-11-20 08:56:22.548889] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3210768 ] 00:05:45.834 [2024-11-20 08:56:22.615972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.834 [2024-11-20 08:56:22.671717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.834 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:46.770 [2024-11-20T07:56:23.799Z] ====================================== 00:05:46.770 [2024-11-20T07:56:23.799Z] busy:2706810012 (cyc) 00:05:46.770 [2024-11-20T07:56:23.799Z] total_run_count: 365000 00:05:46.770 [2024-11-20T07:56:23.799Z] tsc_hz: 2700000000 (cyc) 00:05:46.770 [2024-11-20T07:56:23.799Z] ====================================== 00:05:46.770 [2024-11-20T07:56:23.799Z] poller_cost: 7415 (cyc), 2746 (nsec) 00:05:46.770 00:05:46.770 real 0m1.204s 00:05:46.770 user 0m1.137s 00:05:46.770 sys 0m0.062s 00:05:46.770 08:56:23 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:46.770 08:56:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:46.770 ************************************ 00:05:46.770 END TEST thread_poller_perf 00:05:46.770 ************************************ 00:05:46.770 08:56:23 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:46.770 08:56:23 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:46.770 08:56:23 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:46.770 08:56:23 thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.770 ************************************ 00:05:46.770 START TEST thread_poller_perf 00:05:46.770 
************************************ 00:05:46.770 08:56:23 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:47.028 [2024-11-20 08:56:23.806607] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:05:47.028 [2024-11-20 08:56:23.806671] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3210924 ] 00:05:47.028 [2024-11-20 08:56:23.871038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.028 [2024-11-20 08:56:23.926953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.028 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:47.962 [2024-11-20T07:56:24.991Z] ====================================== 00:05:47.962 [2024-11-20T07:56:24.991Z] busy:2702150637 (cyc) 00:05:47.962 [2024-11-20T07:56:24.991Z] total_run_count: 4852000 00:05:47.962 [2024-11-20T07:56:24.991Z] tsc_hz: 2700000000 (cyc) 00:05:47.962 [2024-11-20T07:56:24.991Z] ====================================== 00:05:47.962 [2024-11-20T07:56:24.991Z] poller_cost: 556 (cyc), 205 (nsec) 00:05:47.962 00:05:47.962 real 0m1.194s 00:05:47.962 user 0m1.122s 00:05:47.962 sys 0m0.067s 00:05:47.962 08:56:24 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:47.962 08:56:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:47.962 ************************************ 00:05:47.962 END TEST thread_poller_perf 00:05:47.962 ************************************ 00:05:48.221 08:56:25 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:48.221 00:05:48.221 real 0m2.649s 00:05:48.221 user 0m2.409s 00:05:48.221 sys 0m0.245s 00:05:48.221 08:56:25 thread -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:05:48.221 08:56:25 thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.221 ************************************ 00:05:48.221 END TEST thread 00:05:48.221 ************************************ 00:05:48.221 08:56:25 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:48.221 08:56:25 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:48.221 08:56:25 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:48.221 08:56:25 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:48.221 08:56:25 -- common/autotest_common.sh@10 -- # set +x 00:05:48.221 ************************************ 00:05:48.221 START TEST app_cmdline 00:05:48.221 ************************************ 00:05:48.221 08:56:25 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:48.221 * Looking for test storage... 00:05:48.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:48.221 08:56:25 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:48.221 08:56:25 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:05:48.221 08:56:25 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:48.221 08:56:25 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:48.221 08:56:25 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.221 08:56:25 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.221 08:56:25 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.221 08:56:25 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.221 08:56:25 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.221 08:56:25 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.221 08:56:25 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:05:48.221 08:56:25 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.221 08:56:25 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.221 08:56:25 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.221 08:56:25 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.221 08:56:25 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:48.221 08:56:25 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:48.221 08:56:25 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.221 08:56:25 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:48.221 08:56:25 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:48.221 08:56:25 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:48.221 08:56:25 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.221 08:56:25 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:48.221 08:56:25 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.221 08:56:25 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:48.221 08:56:25 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:48.221 08:56:25 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.222 08:56:25 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:48.222 08:56:25 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.222 08:56:25 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.222 08:56:25 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.222 08:56:25 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:48.222 08:56:25 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.222 08:56:25 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:48.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.222 --rc genhtml_branch_coverage=1 
00:05:48.222 --rc genhtml_function_coverage=1 00:05:48.222 --rc genhtml_legend=1 00:05:48.222 --rc geninfo_all_blocks=1 00:05:48.222 --rc geninfo_unexecuted_blocks=1 00:05:48.222 00:05:48.222 ' 00:05:48.222 08:56:25 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:48.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.222 --rc genhtml_branch_coverage=1 00:05:48.222 --rc genhtml_function_coverage=1 00:05:48.222 --rc genhtml_legend=1 00:05:48.222 --rc geninfo_all_blocks=1 00:05:48.222 --rc geninfo_unexecuted_blocks=1 00:05:48.222 00:05:48.222 ' 00:05:48.222 08:56:25 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:48.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.222 --rc genhtml_branch_coverage=1 00:05:48.222 --rc genhtml_function_coverage=1 00:05:48.222 --rc genhtml_legend=1 00:05:48.222 --rc geninfo_all_blocks=1 00:05:48.222 --rc geninfo_unexecuted_blocks=1 00:05:48.222 00:05:48.222 ' 00:05:48.222 08:56:25 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:48.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.222 --rc genhtml_branch_coverage=1 00:05:48.222 --rc genhtml_function_coverage=1 00:05:48.222 --rc genhtml_legend=1 00:05:48.222 --rc geninfo_all_blocks=1 00:05:48.222 --rc geninfo_unexecuted_blocks=1 00:05:48.222 00:05:48.222 ' 00:05:48.222 08:56:25 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:48.222 08:56:25 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3211126 00:05:48.222 08:56:25 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:48.222 08:56:25 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3211126 00:05:48.222 08:56:25 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 3211126 ']' 00:05:48.222 08:56:25 app_cmdline -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:48.222 08:56:25 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:48.222 08:56:25 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.222 08:56:25 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:48.222 08:56:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:48.480 [2024-11-20 08:56:25.251423] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:05:48.480 [2024-11-20 08:56:25.251503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3211126 ] 00:05:48.480 [2024-11-20 08:56:25.318354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.480 [2024-11-20 08:56:25.377586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.738 08:56:25 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:48.738 08:56:25 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:05:48.738 08:56:25 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:48.996 { 00:05:48.996 "version": "SPDK v25.01-pre git sha1 7a5718d02", 00:05:48.996 "fields": { 00:05:48.996 "major": 25, 00:05:48.996 "minor": 1, 00:05:48.996 "patch": 0, 00:05:48.996 "suffix": "-pre", 00:05:48.996 "commit": "7a5718d02" 00:05:48.996 } 00:05:48.996 } 00:05:48.996 08:56:25 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:48.996 08:56:25 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:48.996 08:56:25 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:05:48.996 08:56:25 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:48.996 08:56:25 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:48.996 08:56:25 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.996 08:56:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:48.996 08:56:25 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:48.996 08:56:25 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:48.996 08:56:25 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.996 08:56:25 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:48.996 08:56:25 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:48.996 08:56:25 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:48.996 08:56:25 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:48.996 08:56:25 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:48.996 08:56:25 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:48.996 08:56:25 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.996 08:56:25 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:48.996 08:56:25 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.996 08:56:25 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:48.996 08:56:25 app_cmdline -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:05:48.996 08:56:25 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:48.996 08:56:25 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:48.996 08:56:25 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:49.254 request: 00:05:49.254 { 00:05:49.254 "method": "env_dpdk_get_mem_stats", 00:05:49.254 "req_id": 1 00:05:49.254 } 00:05:49.254 Got JSON-RPC error response 00:05:49.254 response: 00:05:49.254 { 00:05:49.254 "code": -32601, 00:05:49.254 "message": "Method not found" 00:05:49.254 } 00:05:49.254 08:56:26 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:49.254 08:56:26 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:49.254 08:56:26 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:49.254 08:56:26 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:49.254 08:56:26 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3211126 00:05:49.254 08:56:26 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 3211126 ']' 00:05:49.254 08:56:26 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 3211126 00:05:49.254 08:56:26 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:05:49.254 08:56:26 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:49.254 08:56:26 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3211126 00:05:49.254 08:56:26 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:49.254 08:56:26 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:49.254 08:56:26 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3211126' 00:05:49.254 killing process with pid 3211126 00:05:49.254 
08:56:26 app_cmdline -- common/autotest_common.sh@971 -- # kill 3211126 00:05:49.254 08:56:26 app_cmdline -- common/autotest_common.sh@976 -- # wait 3211126 00:05:49.817 00:05:49.817 real 0m1.617s 00:05:49.817 user 0m1.996s 00:05:49.817 sys 0m0.487s 00:05:49.817 08:56:26 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:49.817 08:56:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:49.817 ************************************ 00:05:49.817 END TEST app_cmdline 00:05:49.817 ************************************ 00:05:49.817 08:56:26 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:49.817 08:56:26 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:49.817 08:56:26 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:49.817 08:56:26 -- common/autotest_common.sh@10 -- # set +x 00:05:49.817 ************************************ 00:05:49.817 START TEST version 00:05:49.817 ************************************ 00:05:49.817 08:56:26 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:49.817 * Looking for test storage... 
00:05:49.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:49.817 08:56:26 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:49.817 08:56:26 version -- common/autotest_common.sh@1691 -- # lcov --version 00:05:49.817 08:56:26 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:50.074 08:56:26 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:50.074 08:56:26 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.074 08:56:26 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.074 08:56:26 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.074 08:56:26 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.074 08:56:26 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.074 08:56:26 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.075 08:56:26 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.075 08:56:26 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.075 08:56:26 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.075 08:56:26 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.075 08:56:26 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.075 08:56:26 version -- scripts/common.sh@344 -- # case "$op" in 00:05:50.075 08:56:26 version -- scripts/common.sh@345 -- # : 1 00:05:50.075 08:56:26 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.075 08:56:26 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.075 08:56:26 version -- scripts/common.sh@365 -- # decimal 1 00:05:50.075 08:56:26 version -- scripts/common.sh@353 -- # local d=1 00:05:50.075 08:56:26 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.075 08:56:26 version -- scripts/common.sh@355 -- # echo 1 00:05:50.075 08:56:26 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.075 08:56:26 version -- scripts/common.sh@366 -- # decimal 2 00:05:50.075 08:56:26 version -- scripts/common.sh@353 -- # local d=2 00:05:50.075 08:56:26 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.075 08:56:26 version -- scripts/common.sh@355 -- # echo 2 00:05:50.075 08:56:26 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.075 08:56:26 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.075 08:56:26 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.075 08:56:26 version -- scripts/common.sh@368 -- # return 0 00:05:50.075 08:56:26 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.075 08:56:26 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:50.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.075 --rc genhtml_branch_coverage=1 00:05:50.075 --rc genhtml_function_coverage=1 00:05:50.075 --rc genhtml_legend=1 00:05:50.075 --rc geninfo_all_blocks=1 00:05:50.075 --rc geninfo_unexecuted_blocks=1 00:05:50.075 00:05:50.075 ' 00:05:50.075 08:56:26 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:50.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.075 --rc genhtml_branch_coverage=1 00:05:50.075 --rc genhtml_function_coverage=1 00:05:50.075 --rc genhtml_legend=1 00:05:50.075 --rc geninfo_all_blocks=1 00:05:50.075 --rc geninfo_unexecuted_blocks=1 00:05:50.075 00:05:50.075 ' 00:05:50.075 08:56:26 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:50.075 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.075 --rc genhtml_branch_coverage=1 00:05:50.075 --rc genhtml_function_coverage=1 00:05:50.075 --rc genhtml_legend=1 00:05:50.075 --rc geninfo_all_blocks=1 00:05:50.075 --rc geninfo_unexecuted_blocks=1 00:05:50.075 00:05:50.075 ' 00:05:50.075 08:56:26 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:50.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.075 --rc genhtml_branch_coverage=1 00:05:50.075 --rc genhtml_function_coverage=1 00:05:50.075 --rc genhtml_legend=1 00:05:50.075 --rc geninfo_all_blocks=1 00:05:50.075 --rc geninfo_unexecuted_blocks=1 00:05:50.075 00:05:50.075 ' 00:05:50.075 08:56:26 version -- app/version.sh@17 -- # get_header_version major 00:05:50.075 08:56:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:50.075 08:56:26 version -- app/version.sh@14 -- # cut -f2 00:05:50.075 08:56:26 version -- app/version.sh@14 -- # tr -d '"' 00:05:50.075 08:56:26 version -- app/version.sh@17 -- # major=25 00:05:50.075 08:56:26 version -- app/version.sh@18 -- # get_header_version minor 00:05:50.075 08:56:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:50.075 08:56:26 version -- app/version.sh@14 -- # cut -f2 00:05:50.075 08:56:26 version -- app/version.sh@14 -- # tr -d '"' 00:05:50.075 08:56:26 version -- app/version.sh@18 -- # minor=1 00:05:50.075 08:56:26 version -- app/version.sh@19 -- # get_header_version patch 00:05:50.075 08:56:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:50.075 08:56:26 version -- app/version.sh@14 -- # cut -f2 00:05:50.075 08:56:26 version -- app/version.sh@14 -- # tr -d '"' 00:05:50.075 
08:56:26 version -- app/version.sh@19 -- # patch=0 00:05:50.075 08:56:26 version -- app/version.sh@20 -- # get_header_version suffix 00:05:50.075 08:56:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:50.075 08:56:26 version -- app/version.sh@14 -- # cut -f2 00:05:50.075 08:56:26 version -- app/version.sh@14 -- # tr -d '"' 00:05:50.075 08:56:26 version -- app/version.sh@20 -- # suffix=-pre 00:05:50.075 08:56:26 version -- app/version.sh@22 -- # version=25.1 00:05:50.075 08:56:26 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:50.075 08:56:26 version -- app/version.sh@28 -- # version=25.1rc0 00:05:50.075 08:56:26 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:50.075 08:56:26 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:50.075 08:56:26 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:50.075 08:56:26 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:50.075 00:05:50.075 real 0m0.198s 00:05:50.075 user 0m0.130s 00:05:50.075 sys 0m0.095s 00:05:50.075 08:56:26 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:50.075 08:56:26 version -- common/autotest_common.sh@10 -- # set +x 00:05:50.075 ************************************ 00:05:50.075 END TEST version 00:05:50.075 ************************************ 00:05:50.075 08:56:26 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:50.075 08:56:26 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:50.075 08:56:26 -- spdk/autotest.sh@194 -- # uname -s 00:05:50.075 08:56:26 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:05:50.075 08:56:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:50.075 08:56:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:50.075 08:56:26 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:50.075 08:56:26 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:50.075 08:56:26 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:50.075 08:56:26 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:50.075 08:56:26 -- common/autotest_common.sh@10 -- # set +x 00:05:50.075 08:56:26 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:50.075 08:56:26 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:05:50.075 08:56:26 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:05:50.075 08:56:26 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:05:50.075 08:56:26 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:05:50.075 08:56:26 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:05:50.075 08:56:26 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:50.075 08:56:26 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:50.075 08:56:26 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:50.075 08:56:26 -- common/autotest_common.sh@10 -- # set +x 00:05:50.075 ************************************ 00:05:50.075 START TEST nvmf_tcp 00:05:50.075 ************************************ 00:05:50.075 08:56:26 nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:50.075 * Looking for test storage... 
00:05:50.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:50.075 08:56:27 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:50.075 08:56:27 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:50.075 08:56:27 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:50.333 08:56:27 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:50.333 08:56:27 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.333 08:56:27 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.333 08:56:27 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.333 08:56:27 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.333 08:56:27 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.333 08:56:27 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.333 08:56:27 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.333 08:56:27 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.333 08:56:27 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.333 08:56:27 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.333 08:56:27 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.333 08:56:27 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:50.333 08:56:27 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:50.333 08:56:27 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.333 08:56:27 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.333 08:56:27 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:50.333 08:56:27 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:50.333 08:56:27 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.333 08:56:27 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:50.333 08:56:27 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.333 08:56:27 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:50.333 08:56:27 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:50.333 08:56:27 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.333 08:56:27 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:50.333 08:56:27 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.333 08:56:27 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.333 08:56:27 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.333 08:56:27 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:50.333 08:56:27 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.333 08:56:27 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:50.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.333 --rc genhtml_branch_coverage=1 00:05:50.333 --rc genhtml_function_coverage=1 00:05:50.333 --rc genhtml_legend=1 00:05:50.333 --rc geninfo_all_blocks=1 00:05:50.333 --rc geninfo_unexecuted_blocks=1 00:05:50.333 00:05:50.333 ' 00:05:50.333 08:56:27 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:50.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.333 --rc genhtml_branch_coverage=1 00:05:50.333 --rc genhtml_function_coverage=1 00:05:50.333 --rc genhtml_legend=1 00:05:50.333 --rc geninfo_all_blocks=1 00:05:50.333 --rc geninfo_unexecuted_blocks=1 00:05:50.333 00:05:50.333 ' 00:05:50.333 08:56:27 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:05:50.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.333 --rc genhtml_branch_coverage=1 00:05:50.333 --rc genhtml_function_coverage=1 00:05:50.333 --rc genhtml_legend=1 00:05:50.333 --rc geninfo_all_blocks=1 00:05:50.333 --rc geninfo_unexecuted_blocks=1 00:05:50.333 00:05:50.333 ' 00:05:50.333 08:56:27 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:50.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.333 --rc genhtml_branch_coverage=1 00:05:50.333 --rc genhtml_function_coverage=1 00:05:50.333 --rc genhtml_legend=1 00:05:50.333 --rc geninfo_all_blocks=1 00:05:50.333 --rc geninfo_unexecuted_blocks=1 00:05:50.333 00:05:50.333 ' 00:05:50.333 08:56:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:50.333 08:56:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:50.333 08:56:27 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:50.333 08:56:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:50.333 08:56:27 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:50.333 08:56:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:50.333 ************************************ 00:05:50.333 START TEST nvmf_target_core 00:05:50.333 ************************************ 00:05:50.333 08:56:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:50.333 * Looking for test storage... 
00:05:50.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:50.333 08:56:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:50.333 08:56:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:05:50.333 08:56:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:50.333 08:56:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:50.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.334 --rc genhtml_branch_coverage=1 00:05:50.334 --rc genhtml_function_coverage=1 00:05:50.334 --rc genhtml_legend=1 00:05:50.334 --rc geninfo_all_blocks=1 00:05:50.334 --rc geninfo_unexecuted_blocks=1 00:05:50.334 00:05:50.334 ' 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:50.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.334 --rc genhtml_branch_coverage=1 
00:05:50.334 --rc genhtml_function_coverage=1 00:05:50.334 --rc genhtml_legend=1 00:05:50.334 --rc geninfo_all_blocks=1 00:05:50.334 --rc geninfo_unexecuted_blocks=1 00:05:50.334 00:05:50.334 ' 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:50.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.334 --rc genhtml_branch_coverage=1 00:05:50.334 --rc genhtml_function_coverage=1 00:05:50.334 --rc genhtml_legend=1 00:05:50.334 --rc geninfo_all_blocks=1 00:05:50.334 --rc geninfo_unexecuted_blocks=1 00:05:50.334 00:05:50.334 ' 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:50.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.334 --rc genhtml_branch_coverage=1 00:05:50.334 --rc genhtml_function_coverage=1 00:05:50.334 --rc genhtml_legend=1 00:05:50.334 --rc geninfo_all_blocks=1 00:05:50.334 --rc geninfo_unexecuted_blocks=1 00:05:50.334 00:05:50.334 ' 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:50.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:50.334 ************************************ 00:05:50.334 START TEST nvmf_abort 00:05:50.334 ************************************ 00:05:50.334 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:50.593 * Looking for test storage... 
00:05:50.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.593 
08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:50.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.593 --rc genhtml_branch_coverage=1 00:05:50.593 --rc genhtml_function_coverage=1 00:05:50.593 --rc genhtml_legend=1 00:05:50.593 --rc geninfo_all_blocks=1 00:05:50.593 --rc 
geninfo_unexecuted_blocks=1 00:05:50.593 00:05:50.593 ' 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:50.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.593 --rc genhtml_branch_coverage=1 00:05:50.593 --rc genhtml_function_coverage=1 00:05:50.593 --rc genhtml_legend=1 00:05:50.593 --rc geninfo_all_blocks=1 00:05:50.593 --rc geninfo_unexecuted_blocks=1 00:05:50.593 00:05:50.593 ' 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:50.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.593 --rc genhtml_branch_coverage=1 00:05:50.593 --rc genhtml_function_coverage=1 00:05:50.593 --rc genhtml_legend=1 00:05:50.593 --rc geninfo_all_blocks=1 00:05:50.593 --rc geninfo_unexecuted_blocks=1 00:05:50.593 00:05:50.593 ' 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:50.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.593 --rc genhtml_branch_coverage=1 00:05:50.593 --rc genhtml_function_coverage=1 00:05:50.593 --rc genhtml_legend=1 00:05:50.593 --rc geninfo_all_blocks=1 00:05:50.593 --rc geninfo_unexecuted_blocks=1 00:05:50.593 00:05:50.593 ' 00:05:50.593 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:50.594 08:56:27 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:50.594 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:50.594 08:56:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.126 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:53.126 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:53.126 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:53.126 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:53.126 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:53.126 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:53.126 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:53.126 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:53.126 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:53.126 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:53.126 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:53.126 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:53.126 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:53.126 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:53.126 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:53.126 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:53.126 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:53.126 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:53.126 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:53.127 08:56:29 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:05:53.127 Found 0000:09:00.0 (0x8086 - 0x159b) 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:05:53.127 Found 0000:09:00.1 (0x8086 - 0x159b) 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:53.127 08:56:29 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:05:53.127 Found net devices under 0000:09:00.0: cvl_0_0 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:09:00.1: cvl_0_1' 00:05:53.127 Found net devices under 0000:09:00.1: cvl_0_1 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:53.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:53.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:05:53.127 00:05:53.127 --- 10.0.0.2 ping statistics --- 00:05:53.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:53.127 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:53.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:53.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:05:53.127 00:05:53.127 --- 10.0.0.1 ping statistics --- 00:05:53.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:53.127 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:53.127 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:53.128 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:05:53.128 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.128 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3213217 00:05:53.128 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:53.128 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3213217 00:05:53.128 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 3213217 ']' 00:05:53.128 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.128 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:53.128 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.128 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:53.128 08:56:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.128 [2024-11-20 08:56:29.771198] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:05:53.128 [2024-11-20 08:56:29.771299] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:53.128 [2024-11-20 08:56:29.845856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:53.128 [2024-11-20 08:56:29.905338] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:53.128 [2024-11-20 08:56:29.905411] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:53.128 [2024-11-20 08:56:29.905424] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:53.128 [2024-11-20 08:56:29.905435] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:53.128 [2024-11-20 08:56:29.905445] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:53.128 [2024-11-20 08:56:29.906921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.128 [2024-11-20 08:56:29.907025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:53.128 [2024-11-20 08:56:29.907034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.128 08:56:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:53.128 08:56:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:05:53.128 08:56:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:53.128 08:56:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:53.128 08:56:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.128 08:56:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:53.128 08:56:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:53.128 08:56:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.128 08:56:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.128 [2024-11-20 08:56:30.050066] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:53.128 08:56:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.128 08:56:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:53.128 08:56:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.128 08:56:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.128 Malloc0 00:05:53.128 08:56:30 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.128 08:56:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:53.128 08:56:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.128 08:56:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.128 Delay0 00:05:53.128 08:56:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.128 08:56:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:53.128 08:56:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.128 08:56:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.128 08:56:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.128 08:56:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:53.128 08:56:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.128 08:56:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.128 08:56:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.128 08:56:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:53.128 08:56:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.128 08:56:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.128 [2024-11-20 08:56:30.132209] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:53.128 08:56:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.128 08:56:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:53.128 08:56:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.128 08:56:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:53.128 08:56:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.128 08:56:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:53.386 [2024-11-20 08:56:30.247088] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:55.916 Initializing NVMe Controllers 00:05:55.916 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:55.916 controller IO queue size 128 less than required 00:05:55.916 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:55.916 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:55.916 Initialization complete. Launching workers. 
00:05:55.916 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28848 00:05:55.916 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28909, failed to submit 62 00:05:55.916 success 28852, unsuccessful 57, failed 0 00:05:55.916 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:55.916 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.916 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:55.916 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.916 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:55.916 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:55.916 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:55.916 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:55.916 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:55.916 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:55.916 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:55.916 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:55.916 rmmod nvme_tcp 00:05:55.916 rmmod nvme_fabrics 00:05:55.916 rmmod nvme_keyring 00:05:55.916 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:55.916 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:55.916 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:55.916 08:56:32 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3213217 ']' 00:05:55.916 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3213217 00:05:55.916 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 3213217 ']' 00:05:55.916 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 3213217 00:05:55.916 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:05:55.916 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:55.916 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3213217 00:05:55.916 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:05:55.916 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:05:55.916 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3213217' 00:05:55.916 killing process with pid 3213217 00:05:55.917 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 3213217 00:05:55.917 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 3213217 00:05:55.917 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:55.917 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:55.917 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:55.917 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:55.917 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:55.917 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:05:55.917 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:55.917 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:55.917 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:55.917 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:55.917 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:55.917 08:56:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:57.825 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:57.825 00:05:57.825 real 0m7.375s 00:05:57.825 user 0m10.698s 00:05:57.825 sys 0m2.537s 00:05:57.825 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:57.825 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:57.825 ************************************ 00:05:57.825 END TEST nvmf_abort 00:05:57.825 ************************************ 00:05:57.825 08:56:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:57.825 08:56:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:57.825 08:56:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:57.825 08:56:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:57.825 ************************************ 00:05:57.825 START TEST nvmf_ns_hotplug_stress 00:05:57.825 ************************************ 00:05:57.825 08:56:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:57.825 * Looking for test storage... 00:05:57.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:57.825 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:57.825 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:05:57.825 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:58.084 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:58.084 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.084 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.084 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.084 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.084 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.084 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.084 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.084 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.084 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.084 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.084 
08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.084 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:58.084 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.085 08:56:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:58.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.085 --rc genhtml_branch_coverage=1 00:05:58.085 --rc genhtml_function_coverage=1 00:05:58.085 --rc genhtml_legend=1 00:05:58.085 --rc geninfo_all_blocks=1 00:05:58.085 --rc geninfo_unexecuted_blocks=1 00:05:58.085 00:05:58.085 ' 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:58.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.085 --rc genhtml_branch_coverage=1 00:05:58.085 --rc genhtml_function_coverage=1 00:05:58.085 --rc genhtml_legend=1 00:05:58.085 --rc geninfo_all_blocks=1 00:05:58.085 --rc geninfo_unexecuted_blocks=1 00:05:58.085 00:05:58.085 ' 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:58.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.085 --rc genhtml_branch_coverage=1 00:05:58.085 --rc genhtml_function_coverage=1 00:05:58.085 --rc genhtml_legend=1 00:05:58.085 --rc geninfo_all_blocks=1 00:05:58.085 --rc geninfo_unexecuted_blocks=1 00:05:58.085 00:05:58.085 ' 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:58.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.085 --rc genhtml_branch_coverage=1 00:05:58.085 --rc genhtml_function_coverage=1 00:05:58.085 --rc genhtml_legend=1 00:05:58.085 --rc geninfo_all_blocks=1 00:05:58.085 --rc geninfo_unexecuted_blocks=1 00:05:58.085 
00:05:58.085 ' 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.085 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.086 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.086 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:58.086 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.086 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:58.086 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:58.086 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:58.086 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:58.086 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:58.086 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:58.086 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:58.086 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:58.086 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:58.086 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:58.086 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:58.086 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:58.086 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:58.086 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:58.086 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:58.086 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:58.086 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:58.086 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:58.086 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:58.086 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:58.086 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:58.086 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:58.086 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:58.086 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:58.086 08:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:00.620 08:56:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:00.620 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:00.620 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:00.620 08:56:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:00.620 Found net devices under 0000:09:00.0: cvl_0_0 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:00.620 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:00.621 08:56:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:00.621 Found net devices under 0000:09:00.1: cvl_0_1 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:00.621 08:56:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:00.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:00.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:06:00.621 00:06:00.621 --- 10.0.0.2 ping statistics --- 00:06:00.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:00.621 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:00.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:00.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:06:00.621 00:06:00.621 --- 10.0.0.1 ping statistics --- 00:06:00.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:00.621 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3215580 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3215580 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 3215580 ']' 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:00.621 [2024-11-20 08:56:37.374880] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:06:00.621 [2024-11-20 08:56:37.374973] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:00.621 [2024-11-20 08:56:37.449130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.621 [2024-11-20 08:56:37.509705] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:00.621 [2024-11-20 08:56:37.509755] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:00.621 [2024-11-20 08:56:37.509782] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:00.621 [2024-11-20 08:56:37.509793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:00.621 [2024-11-20 08:56:37.509803] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:00.621 [2024-11-20 08:56:37.511285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.621 [2024-11-20 08:56:37.511351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.621 [2024-11-20 08:56:37.511355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:00.621 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:00.879 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:00.879 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:00.879 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:01.138 [2024-11-20 08:56:37.918109] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:01.138 08:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:01.395 08:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:01.653 [2024-11-20 08:56:38.465146] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:01.653 08:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:01.911 08:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:02.168 Malloc0 00:06:02.168 08:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:02.425 Delay0 00:06:02.425 08:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.683 08:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:02.940 NULL1 00:06:02.940 08:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:03.198 08:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3215997 00:06:03.198 08:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:03.198 08:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3215997 00:06:03.198 08:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.570 Read completed with error (sct=0, sc=11) 00:06:04.570 08:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.570 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.570 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.570 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.570 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.570 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.570 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.827 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.827 08:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:04.827 08:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:05.085 true 00:06:05.085 08:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3215997 00:06:05.085 08:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:06:05.650 08:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.221 08:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:06.221 08:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:06.221 true 00:06:06.221 08:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3215997 00:06:06.221 08:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.530 08:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.812 08:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:06.812 08:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:07.069 true 00:06:07.069 08:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3215997 00:06:07.069 08:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.326 08:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.584 08:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:07.584 08:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:07.841 true 00:06:07.841 08:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3215997 00:06:07.841 08:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.212 08:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.212 08:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:09.212 08:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:09.469 true 00:06:09.469 08:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3215997 00:06:09.469 08:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.726 08:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
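The bring-up sequence logged above (TCP transport, subsystem `cnode1`, data and discovery listeners, the Malloc0/Delay0/NULL1 bdevs, then `spdk_nvme_perf`) can be replayed against a running `nvmf_tgt`. A minimal dry-run sketch, assuming `scripts/rpc.py` is on `PATH` as `rpc.py`; `RPC` is set to `echo` so the commands are printed rather than executed:

```shell
# Dry-run: RPC echoes each command instead of issuing it. Drop the "echo"
# (and run inside the target's netns, as the log does) to execute for real.
RPC="echo rpc.py"

setup_cmds=$(
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_malloc_create 32 512 -b Malloc0
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $RPC bdev_null_create NULL1 1000 512
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
)
printf '%s\n' "$setup_cmds"
```

The arguments are copied from the log; the Delay0 bdev's 1-second delays on every I/O path are what keep requests in flight long enough for namespace removal to race against them.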
00:06:09.983 08:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:09.983 08:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:10.241 true 00:06:10.241 08:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3215997 00:06:10.241 08:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.499 08:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.756 08:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:10.756 08:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:11.014 true 00:06:11.014 08:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3215997 00:06:11.014 08:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.384 08:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.384 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.384 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:06:12.384 08:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:12.384 08:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:12.641 true 00:06:12.641 08:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3215997 00:06:12.641 08:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.898 08:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.155 08:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:13.155 08:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:13.413 true 00:06:13.413 08:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3215997 00:06:13.413 08:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:14.344 08:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.344 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:14.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:14.601 08:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:14.601 08:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:14.858 true 00:06:14.858 08:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3215997 00:06:14.858 08:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.115 08:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.372 08:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:15.372 08:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:15.629 true 00:06:15.629 08:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3215997 00:06:15.629 08:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.560 08:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.560 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.560 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.560 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.817 08:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:16.817 08:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:17.074 true 00:06:17.074 08:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3215997 00:06:17.074 08:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.332 08:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.590 08:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:17.590 08:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:17.847 true 00:06:17.847 08:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3215997 00:06:17.847 08:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.778 08:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.778 08:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:18.778 08:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:19.035 true 00:06:19.293 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3215997 00:06:19.293 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.550 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.808 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:19.808 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:20.066 true 00:06:20.066 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3215997 00:06:20.066 08:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.324 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.581 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:20.581 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:20.837 true 00:06:20.837 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3215997 00:06:20.837 08:56:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.765 08:56:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.022 08:56:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:22.022 08:56:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:22.279 true 00:06:22.279 08:56:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3215997 00:06:22.279 08:56:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.537 08:56:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.795 08:56:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:22.795 08:56:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:23.053 true 00:06:23.053 08:56:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3215997 00:06:23.053 08:56:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.310 08:57:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.567 08:57:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:23.567 08:57:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:23.823 true 00:06:23.823 08:57:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3215997 00:06:23.823 08:57:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.752 08:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.752 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.009 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.009 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:06:25.266 08:57:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:25.266 08:57:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:25.523 true 00:06:25.524 08:57:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3215997 00:06:25.524 08:57:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.780 08:57:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.037 08:57:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:26.037 08:57:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:26.293 true 00:06:26.293 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3215997 00:06:26.293 08:57:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.225 08:57:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
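Each pass of the stress loop above detaches namespace 1, re-attaches Delay0, and grows NULL1 by one unit (`null_size` climbs 1000, 1001, 1002, … in the log), repeating while the perf process is alive. A sketch of that loop, with a fixed iteration count standing in for the real script's `kill -0 "$PERF_PID"` liveness check and `RPC` set to `echo` for a dry run:

```shell
# Dry-run sketch of the hotplug stress loop seen in the log.
RPC="echo rpc.py"
null_size=1000
iterations=3   # the real loop runs until spdk_nvme_perf (-t 30) exits

for ((i = 0; i < iterations; i++)); do          # real script: while kill -0 "$PERF_PID"
  $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  ((null_size++))                               # log shows 1000 -> 1028 over the run
  $RPC bdev_null_resize NULL1 "$null_size"
done
echo "final null_size=$null_size"
```

The "Read completed with error (sct=0, sc=11)" messages interleaved in the log are the expected side effect: in-flight reads against the just-removed namespace fail back to `spdk_nvme_perf` with that status while the loop churns.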
00:06:27.482 08:57:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:27.482 08:57:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:27.740 true 00:06:27.740 08:57:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3215997 00:06:27.740 08:57:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.998 08:57:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.255 08:57:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:28.255 08:57:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:28.512 true 00:06:28.512 08:57:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3215997 00:06:28.512 08:57:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.771 08:57:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.028 08:57:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 
00:06:29.028 08:57:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:29.286 true 00:06:29.286 08:57:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3215997 00:06:29.286 08:57:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.220 08:57:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.478 08:57:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:30.478 08:57:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:31.043 true 00:06:31.043 08:57:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3215997 00:06:31.043 08:57:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.300 08:57:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.558 08:57:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:31.558 08:57:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:31.817 true 00:06:31.817 08:57:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3215997 00:06:31.817 08:57:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.074 08:57:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.332 08:57:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:32.332 08:57:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:32.597 true 00:06:32.597 08:57:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3215997 00:06:32.597 08:57:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.535 08:57:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.535 Initializing NVMe 
Controllers 00:06:33.535 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:33.535 Controller IO queue size 128, less than required.
00:06:33.535 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:33.535 Controller IO queue size 128, less than required.
00:06:33.535 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:33.535 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:33.535 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:33.535 Initialization complete. Launching workers.
00:06:33.535 ========================================================
00:06:33.535 Latency(us)
00:06:33.535 Device Information : IOPS MiB/s Average min max
00:06:33.535 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 839.30 0.41 69329.76 3182.75 1034796.31
00:06:33.535 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9435.47 4.61 13567.05 3289.84 455987.29
00:06:33.535 ========================================================
00:06:33.535 Total : 10274.77 5.02 18122.07 3182.75 1034796.31
00:06:33.535
00:06:33.793 08:57:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:33.793 08:57:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:34.051 true 00:06:34.051 08:57:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3215997 00:06:34.051 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3215997) - No such process 00:06:34.051 08:57:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@53 -- # wait 3215997 00:06:34.051 08:57:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.308 08:57:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:34.566 08:57:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:34.566 08:57:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:34.566 08:57:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:34.566 08:57:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:34.566 08:57:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:34.823 null0 00:06:34.823 08:57:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:34.823 08:57:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:34.823 08:57:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:35.081 null1 00:06:35.081 08:57:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:35.081 08:57:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:35.081 08:57:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:35.362 null2 00:06:35.362 08:57:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:35.362 08:57:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:35.362 08:57:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:35.655 null3 00:06:35.655 08:57:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:35.655 08:57:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:35.655 08:57:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:35.915 null4 00:06:35.915 08:57:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:35.915 08:57:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:35.915 08:57:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:36.173 null5 00:06:36.173 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:36.173 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:36.173 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:36.430 null6 00:06:36.430 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:36.430 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:36.430 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:36.688 null7 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
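The eight bdev_null_create calls above (null0 through null7) come from a counting loop in ns_hotplug_stress.sh, visible in the @59/@60 trace markers. A minimal sketch of that loop, with a stub rpc function standing in for scripts/rpc.py so it runs without a live SPDK target (the real script invokes the RPC client directly):

```shell
# Sketch of the null-bdev creation loop traced above (markers @59/@60 of
# ns_hotplug_stress.sh). rpc() is a stub for scripts/rpc.py.
rpc() { echo "rpc $*"; }

nthreads=8
for ((i = 0; i < nthreads; i++)); do
    # 100 MiB null bdev with a 4096-byte block size, matching the trace
    rpc bdev_null_create "null$i" 100 4096
done
```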
00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3220691 3220692 3220694 3220697 3220699 3220701 3220703 3220705 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:36.688 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:36.946 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:36.946 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:36.946 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:36.946 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:36.946 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:36.946 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.946 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:36.946 08:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:37.205 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.205 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.205 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:37.205 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.205 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.205 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 
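The interleaved @14-@18 trace lines above all come from the add_remove helper, one instance per null bdev. From the visible markers (local nsid/bdev at @14, a ten-iteration counter at @16, nvmf_subsystem_add_ns at @17, nvmf_subsystem_remove_ns at @18), the helper is roughly the following; rpc() is again a stub for scripts/rpc.py so the sketch runs standalone:

```shell
# Rough reconstruction of the add_remove helper whose xtrace output
# (markers @14-@18 of ns_hotplug_stress.sh) is interleaved above.
rpc() { echo "rpc $*"; }

add_remove() {
    local nsid=$1 bdev=$2
    # repeatedly hot-add and hot-remove the same namespace ID
    for ((i = 0; i < 10; i++)); do
        rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

add_remove 1 null0   # prints 20 stubbed RPC calls
```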
00:06:37.205 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.205 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.205 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:37.205 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.205 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.205 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.205 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.205 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:37.205 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:37.205 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.205 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.205 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:37.205 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.205 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.205 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:37.205 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.205 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.205 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:37.464 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:37.464 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:37.464 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:37.464 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:37.721 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:06:37.721 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:37.722 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:37.722 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.979 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.979 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.979 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:37.979 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.980 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.980 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:37.980 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.980 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.980 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:37.980 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.980 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.980 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:37.980 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.980 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.980 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:37.980 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.980 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.980 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.980 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:37.980 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.980 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:37.980 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:37.980 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:37.980 08:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:38.237 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:38.237 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:38.237 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:38.237 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:38.237 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:38.237 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:38.237 08:57:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.237 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:38.496 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.496 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.496 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:38.496 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.496 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.496 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:38.496 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.496 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.496 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:38.496 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:38.496 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.496 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:38.496 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.496 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.496 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:38.496 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.496 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.496 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:38.496 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.496 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.496 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:38.496 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:38.496 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:38.496 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:38.755 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:38.755 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:38.755 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:38.755 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:38.755 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:38.755 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:38.755 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.755 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:39.014 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.014 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.014 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:39.014 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.014 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.014 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:39.014 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.014 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.014 08:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:39.014 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.014 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.014 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:06:39.014 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.014 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.014 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:39.014 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.014 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.014 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:39.014 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.014 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.014 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.014 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.014 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:39.014 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:39.271 08:57:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:39.529 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:39.529 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:39.529 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:39.529 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:39.529 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:39.529 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.529 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:39.787 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.787 08:57:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.787 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:39.787 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.787 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.787 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:39.787 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.787 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.787 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:39.787 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.787 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.787 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:39.787 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.787 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:06:39.787 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:39.787 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.787 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.787 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:39.787 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.787 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.787 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:39.787 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:39.787 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:39.787 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:40.045 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:40.045 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:40.045 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:40.045 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:40.045 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.045 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:40.045 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:40.045 08:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:40.303 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.303 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.303 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:06:40.303 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.303 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.303 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:40.303 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.303 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.303 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:40.303 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.303 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.303 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.303 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:40.303 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.303 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:40.303 08:57:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.303 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.304 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:40.304 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.304 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.304 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:40.304 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.304 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.304 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:40.562 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:40.562 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:40.562 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.562 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:40.562 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:40.562 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:40.562 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:40.562 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:40.820 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.820 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.820 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:40.820 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.820 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.820 
08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:40.820 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.820 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.820 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:40.820 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.820 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.820 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:40.820 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.820 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.820 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.820 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:40.820 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.820 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:40.820 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.820 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.820 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:40.820 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:40.820 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:40.820 08:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:41.386 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:41.386 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:41.386 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:41.386 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:41.386 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:41.386 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:41.386 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.386 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:41.386 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.386 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.386 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:41.644 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.644 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.644 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:41.644 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.644 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.644 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:41.644 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.644 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.644 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:41.644 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.644 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.644 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.644 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:41.644 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.644 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:41.644 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.644 08:57:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.644 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:41.644 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:41.644 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:41.644 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:41.902 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:41.902 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:41.902 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:41.902 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:41.902 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:41.902 08:57:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.902 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:41.902 08:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:42.159 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.159 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.159 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:42.159 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.159 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.159 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:42.159 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.159 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.159 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.159 
08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.159 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:42.159 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:42.159 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.159 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.159 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:42.159 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.159 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.159 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:42.159 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.159 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.160 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:06:42.160 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.160 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.160 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:42.419 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:42.419 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:42.419 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:42.419 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:42.419 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:42.419 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:42.419 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.419 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:42.676 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.676 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.676 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.676 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.676 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.676 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.676 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.676 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.676 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.676 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.676 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.676 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.676 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.676 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.676 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:42.676 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:42.676 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:42.676 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:42.676 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:42.676 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:42.676 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:42.676 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:42.676 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:42.676 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:42.676 rmmod nvme_tcp 00:06:42.676 rmmod nvme_fabrics 00:06:42.676 rmmod nvme_keyring 00:06:42.934 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:42.934 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:42.934 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:42.935 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3215580 ']' 00:06:42.935 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3215580 00:06:42.935 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # 
'[' -z 3215580 ']' 00:06:42.935 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 3215580 00:06:42.935 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:06:42.935 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:42.935 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3215580 00:06:42.935 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:42.935 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:42.935 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3215580' 00:06:42.935 killing process with pid 3215580 00:06:42.935 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 3215580 00:06:42.935 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 3215580 00:06:43.193 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:43.193 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:43.193 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:43.193 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:43.193 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:43.193 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:43.193 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@791 -- # iptables-restore 00:06:43.193 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:43.193 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:43.193 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.193 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:43.193 08:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.098 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:45.098 00:06:45.098 real 0m47.267s 00:06:45.098 user 3m39.767s 00:06:45.098 sys 0m16.125s 00:06:45.098 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:45.098 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:45.098 ************************************ 00:06:45.098 END TEST nvmf_ns_hotplug_stress 00:06:45.098 ************************************ 00:06:45.098 08:57:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:45.098 08:57:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:45.098 08:57:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:45.098 08:57:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:45.098 ************************************ 00:06:45.098 START TEST nvmf_delete_subsystem 00:06:45.098 ************************************ 00:06:45.098 
08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:45.356 * Looking for test storage... 00:06:45.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:45.356 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:45.356 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:06:45.356 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:45.356 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:45.356 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.356 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.356 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.356 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.356 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.356 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.356 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.356 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.357 08:57:22 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.357 08:57:22 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:45.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.357 --rc genhtml_branch_coverage=1 00:06:45.357 --rc genhtml_function_coverage=1 00:06:45.357 --rc genhtml_legend=1 00:06:45.357 --rc geninfo_all_blocks=1 00:06:45.357 --rc geninfo_unexecuted_blocks=1 00:06:45.357 00:06:45.357 ' 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:45.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.357 --rc genhtml_branch_coverage=1 00:06:45.357 --rc genhtml_function_coverage=1 00:06:45.357 --rc genhtml_legend=1 00:06:45.357 --rc geninfo_all_blocks=1 00:06:45.357 --rc geninfo_unexecuted_blocks=1 00:06:45.357 00:06:45.357 ' 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:45.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.357 --rc genhtml_branch_coverage=1 00:06:45.357 --rc genhtml_function_coverage=1 00:06:45.357 --rc genhtml_legend=1 00:06:45.357 --rc geninfo_all_blocks=1 00:06:45.357 --rc geninfo_unexecuted_blocks=1 00:06:45.357 00:06:45.357 ' 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:45.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.357 --rc genhtml_branch_coverage=1 00:06:45.357 --rc genhtml_function_coverage=1 00:06:45.357 --rc genhtml_legend=1 00:06:45.357 --rc geninfo_all_blocks=1 00:06:45.357 --rc geninfo_unexecuted_blocks=1 00:06:45.357 00:06:45.357 ' 
00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:45.357 08:57:22 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:45.357 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:45.357 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:45.358 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:45.358 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.358 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:45.358 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:45.358 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:45.358 08:57:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:47.888 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:47.888 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:47.888 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:47.888 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:47.888 08:57:24 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:47.888 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:47.888 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:47.888 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:47.888 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:47.888 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:47.888 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:47.888 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:47.888 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:06:47.888 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:47.888 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:47.888 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:47.888 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:47.888 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:47.888 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:47.888 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:47.888 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:47.889 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:47.889 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:47.889 Found net devices under 0000:09:00.0: cvl_0_0 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:09:00.1: cvl_0_1' 00:06:47.889 Found net devices under 0000:09:00.1: cvl_0_1 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
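The discovery loop above globs `/sys/bus/pci/devices/$pci/net/`* and strips the path prefix to get interface names. A minimal standalone sketch of that logic, run against a fake sysfs tree under mktemp so it works without real NICs (the temp-dir layout is ours; the PCI addresses and cvl_0_* names are taken from the log):

```shell
#!/usr/bin/env bash
# Sketch of the net-device discovery loop from nvmf/common.sh, against a
# fake sysfs tree so it runs anywhere. Directory layout is hypothetical;
# PCI addresses and interface names mirror the log above.
set -euo pipefail

sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:09:00.0/net/cvl_0_0" "$sysfs/0000:09:00.1/net/cvl_0_1"

pci_devs=("0000:09:00.0" "0000:09:00.1")
net_devs=()

for pci in "${pci_devs[@]}"; do
    # Same glob common.sh uses: every netdev registered under this PCI function
    pci_net_devs=("$sysfs/$pci/net/"*)
    # Strip the sysfs path, keeping only the interface name (the ##*/ in the log)
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done

rm -rf "$sysfs"
```

On this run the same loop found `cvl_0_0` and `cvl_0_1`, one port per PCI function of the NIC.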
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:06:47.889 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:06:47.889 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:06:47.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms
00:06:47.889
00:06:47.889 --- 10.0.0.2 ping statistics ---
00:06:47.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:47.889 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms
00:06:47.890 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:06:47.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:47.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms
00:06:47.890
00:06:47.890 --- 10.0.0.1 ping statistics ---
00:06:47.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:47.890 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms
00:06:47.890 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:47.890 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0
00:06:47.890 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:06:47.890 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:47.890 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:06:47.890 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:06:47.890 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:47.890 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:06:47.890 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:06:47.890 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:06:47.890 08:57:24
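The nvmf_tcp_init sequence above moves one NIC port (cvl_0_0, the target side) into a private network namespace and leaves its peer (cvl_0_1, the initiator side) in the default namespace, so both ends of a real TCP path live on one host; the two pings then verify connectivity in each direction. A dry-run sketch of that topology (the run() wrapper is ours and only echoes, since the real commands need root and the cvl_0_* interfaces):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace topology nvmf_tcp_init builds.
# Interface names and IPs come from the log; run() is our own wrapper that
# records and echoes each command, so this executes safely without root.
set -euo pipefail

NETNS=cvl_0_0_ns_spdk
plan=()
run() { plan+=("$*"); echo "+ $*"; }   # swap the echo for: sudo "$@" on a real test box

run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NETNS"
run ip link set cvl_0_0 netns "$NETNS"          # target port into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side, default namespace
run ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NETNS" ip link set cvl_0_0 up
run ip netns exec "$NETNS" ip link set lo up
run ping -c 1 10.0.0.2                          # initiator -> target
run ip netns exec "$NETNS" ping -c 1 10.0.0.1   # target -> initiator
```

Any process launched later under `ip netns exec cvl_0_0_ns_spdk` (the NVMF_TARGET_NS_CMD prefix) only sees cvl_0_0, which is why the target listens on 10.0.0.2 and the initiator connects from 10.0.0.1.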
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:47.890 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:47.890 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:47.890 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3223588 00:06:47.890 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:47.890 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3223588 00:06:47.890 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 3223588 ']' 00:06:47.890 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.890 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:47.890 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.890 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:47.890 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:47.890 [2024-11-20 08:57:24.704641] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
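waitforlisten above blocks until the freshly started nvmf_tgt (pid 3223588) is up and serving RPCs on /var/tmp/spdk.sock, giving up after max_retries=100 attempts. A runnable sketch of that polling pattern (the function name is ours, and we wait on a temp file created by a background job instead of a real SPDK socket):

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern: poll until a socket path appears,
# bounded by a retry budget. Uses a temp file as a stand-in for
# /var/tmp/spdk.sock so it runs without SPDK.
set -euo pipefail

wait_for_sock() {
    local sock=$1 max_retries=${2:-100} i=0
    while [ ! -e "$sock" ]; do
        (( ++i > max_retries )) && return 1   # give up, like waitforlisten
        sleep 0.1
    done
    return 0
}

sock=$(mktemp -u)                  # path only; nothing exists there yet
( sleep 0.3; touch "$sock" ) &     # stand-in for nvmf_tgt creating its socket
wait_for_sock "$sock" 100
echo "listening: $sock"
rm -f "$sock"; wait
```

The real helper additionally checks that the pid is still alive between retries, so a crashed target fails fast instead of burning the whole retry budget.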
00:06:47.890 [2024-11-20 08:57:24.704734] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:47.890 [2024-11-20 08:57:24.778960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:47.890 [2024-11-20 08:57:24.838254] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:47.890 [2024-11-20 08:57:24.838329] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:47.890 [2024-11-20 08:57:24.838345] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:47.890 [2024-11-20 08:57:24.838356] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:47.890 [2024-11-20 08:57:24.838366] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
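The EAL line above shows the target launched with core mask 0x3 (`-c 0x3`, from the `-m 0x3` passed to nvmf_tgt), and the next entries confirm two reactors on cores 0 and 1. A small sketch decoding such a hex mask into its core list (the function name is ours):

```shell
#!/usr/bin/env bash
# Decode an SPDK/DPDK-style hex core mask (as passed via -m / -c) into the
# list of core indices it selects: bit N set means core N is used.
set -euo pipefail

cores_from_mask() {
    local mask=$(( $1 ))   # accepts 0x-prefixed hex
    local core=0
    local out=""
    while (( mask )); do
        (( mask & 1 )) && out+="$core "
        (( mask >>= 1 )) || true   # || true: guard set -e when mask hits 0
        (( core++ )) || true       # || true: guard set -e when core was 0
    done
    echo "${out% }"
}

cores_from_mask 0x3    # -> 0 1  (the two reactors in the log)
cores_from_mask 0xC    # -> 2 3  (the mask spdk_nvme_perf uses later)
```

This is why the target's reactors land on cores 0-1 while the perf initiator (`-c 0xC`) runs on cores 2-3: the two masks are disjoint, so target and initiator never share a core.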
00:06:47.890 [2024-11-20 08:57:24.839777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.890 [2024-11-20 08:57:24.839783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.148 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:48.148 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:06:48.148 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:48.148 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:48.148 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:48.148 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:48.149 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:48.149 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.149 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:48.149 [2024-11-20 08:57:24.981741] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:48.149 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.149 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:48.149 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.149 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:06:48.149 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.149 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:48.149 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.149 08:57:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:48.149 [2024-11-20 08:57:24.997939] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:48.149 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.149 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:48.149 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.149 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:48.149 NULL1 00:06:48.149 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.149 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:48.149 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.149 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:48.149 Delay0 00:06:48.149 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.149 08:57:25 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.149 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.149 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:48.149 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.149 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3223620 00:06:48.149 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:48.149 08:57:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:48.149 [2024-11-20 08:57:25.082759] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
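The rpc_cmd calls above build the target bottom-up: TCP transport, subsystem cnode1, a listener on 10.0.0.2:4420, a null backing bdev, a delay bdev wrapped around it (so in-flight I/O lingers long enough for the later nvmf_delete_subsystem to catch it), and finally the namespace. A dry-run sketch of the same sequence as plain rpc.py calls (the rpc() wrapper is ours and only echoes; on a live target it would be roughly `ip netns exec cvl_0_0_ns_spdk scripts/rpc.py "$@"`, path assumed):

```shell
#!/usr/bin/env bash
# The rpc_cmd sequence from delete_subsystem.sh, written out as rpc.py calls.
# rpc() only records and echoes here, so the sketch runs anywhere.
set -euo pipefail

calls=()
rpc() { calls+=("$*"); echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512     # 1000 MB null bdev, 512-byte blocks
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```

The delay-bdev latencies are in microseconds, so every I/O to Delay0 takes on the order of a second; that is what guarantees the perf run still has queued commands when the subsystem is deleted out from under it.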
00:06:50.048 08:57:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:50.048 08:57:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.048 08:57:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 Write completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 starting I/O failed: -6 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 starting I/O failed: -6 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 starting I/O failed: -6 00:06:50.305 Write completed with error (sct=0, sc=8) 00:06:50.305 Write completed with error (sct=0, sc=8) 00:06:50.305 Write completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 starting I/O failed: -6 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 Write completed with error (sct=0, sc=8) 00:06:50.305 starting I/O failed: -6 00:06:50.305 Write completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 Write completed with error (sct=0, sc=8) 00:06:50.305 starting I/O failed: -6 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error 
(sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 starting I/O failed: -6 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 Write completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 starting I/O failed: -6 00:06:50.305 Write completed with error (sct=0, sc=8) 00:06:50.305 Write completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 starting I/O failed: -6 00:06:50.305 Write completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 Write completed with error (sct=0, sc=8) 00:06:50.305 starting I/O failed: -6 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 Write completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 starting I/O failed: -6 00:06:50.305 Write completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 starting I/O failed: -6 00:06:50.305 Write completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 starting I/O failed: -6 00:06:50.305 [2024-11-20 08:57:27.205420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb0ec00d020 is same with the state(6) to be set 00:06:50.305 Write completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 
00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 starting I/O failed: -6 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 Write completed with error (sct=0, sc=8) 00:06:50.305 Write completed with error (sct=0, sc=8) 00:06:50.305 Write completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 Read completed with error (sct=0, sc=8) 00:06:50.305 Write completed with error (sct=0, sc=8) 00:06:50.305 Write completed with error (sct=0, sc=8) 00:06:50.305 Write completed with error (sct=0, sc=8) 00:06:50.306 starting I/O failed: -6 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 starting I/O failed: -6 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Write 
completed with error (sct=0, sc=8) 00:06:50.306 starting I/O failed: -6 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 starting I/O failed: -6 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 starting I/O failed: -6 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 starting I/O failed: -6 
00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 starting I/O failed: -6 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 starting I/O failed: -6 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 starting I/O failed: -6 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Write 
completed with error (sct=0, sc=8) 00:06:50.306 starting I/O failed: -6 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 starting I/O failed: -6 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Write completed with error (sct=0, sc=8) 00:06:50.306 Read completed with error (sct=0, sc=8) 00:06:50.306 starting I/O failed: -6 00:06:50.306 starting I/O failed: -6 00:06:50.306 starting I/O failed: -6 00:06:50.306 starting I/O failed: -6 00:06:50.306 starting I/O failed: -6 00:06:51.239 [2024-11-20 08:57:28.178776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ceb9a0 is same with the state(6) to be set 00:06:51.239 Read completed with error (sct=0, sc=8) 00:06:51.239 Write completed with error (sct=0, sc=8) 00:06:51.239 Write completed with error (sct=0, sc=8) 00:06:51.239 Read completed with error (sct=0, sc=8) 00:06:51.239 Read completed with error (sct=0, sc=8) 00:06:51.239 Read completed with error (sct=0, sc=8) 00:06:51.239 Read completed with error (sct=0, sc=8) 00:06:51.239 Read completed with error (sct=0, sc=8) 00:06:51.239 Read completed with error (sct=0, sc=8) 00:06:51.239 Read completed with error 
(sct=0, sc=8) 00:06:51.239 Read completed with error (sct=0, sc=8) 00:06:51.239 Read completed with error (sct=0, sc=8) 00:06:51.239 Read completed with error (sct=0, sc=8) 00:06:51.239 Read completed with error (sct=0, sc=8) 00:06:51.239 Write completed with error (sct=0, sc=8) 00:06:51.239 Read completed with error (sct=0, sc=8) 00:06:51.239 Read completed with error (sct=0, sc=8) 00:06:51.239 Read completed with error (sct=0, sc=8) 00:06:51.239 Read completed with error (sct=0, sc=8) 00:06:51.239 Read completed with error (sct=0, sc=8) 00:06:51.239 Read completed with error (sct=0, sc=8) 00:06:51.239 Read completed with error (sct=0, sc=8) 00:06:51.239 Write completed with error (sct=0, sc=8) 00:06:51.239 Read completed with error (sct=0, sc=8) 00:06:51.239 Read completed with error (sct=0, sc=8) 00:06:51.239 Read completed with error (sct=0, sc=8) 00:06:51.239 Write completed with error (sct=0, sc=8) 00:06:51.239 Read completed with error (sct=0, sc=8) 00:06:51.239 Read completed with error (sct=0, sc=8) 00:06:51.239 Read completed with error (sct=0, sc=8) 00:06:51.239 Read completed with error (sct=0, sc=8) 00:06:51.239 Read completed with error (sct=0, sc=8) 00:06:51.239 Write completed with error (sct=0, sc=8) 00:06:51.239 Read completed with error (sct=0, sc=8) 00:06:51.239 Read completed with error (sct=0, sc=8) 00:06:51.239 Read completed with error (sct=0, sc=8) 00:06:51.239 [2024-11-20 08:57:28.207644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cea2c0 is same with the state(6) to be set 00:06:51.239 Write completed with error (sct=0, sc=8) 00:06:51.239 Read completed with error (sct=0, sc=8) 00:06:51.239 Write completed with error (sct=0, sc=8) 00:06:51.239 Write completed with error (sct=0, sc=8) 00:06:51.239 Read completed with error (sct=0, sc=8) 00:06:51.239 Read completed with error (sct=0, sc=8) 00:06:51.239 Read completed with error (sct=0, sc=8) 00:06:51.239 Read completed with error (sct=0, sc=8) 
00:06:51.239 Read completed with error (sct=0, sc=8) 00:06:51.239 Write completed with error (sct=0, sc=8) [... further identical Read/Write completion-error lines for the I/O outstanding on the aborted queue pairs omitted ...] 00:06:51.239 [2024-11-20 08:57:28.207891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cea4a0 is same with the state(6) to be set 00:06:51.239 [2024-11-20 08:57:28.208117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cea860 is same with the state(6) to be set 00:06:51.239 [2024-11-20 08:57:28.208260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb0ec00d350 is same with the state(6) to be set 00:06:51.239 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.239 Initializing NVMe Controllers 00:06:51.239 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:51.239 Controller IO queue size 128, less than required. 00:06:51.239 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:51.239 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:51.239 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:51.239 Initialization complete. Launching workers.
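The `kill -0 3223620` / `sleep 0.5` lines traced above are delete_subsystem.sh's bounded poll: it probes the perf process for liveness every half second and gives up after ~20 iterations. A runnable sketch of that shape (assumption: `sleep 2 &` stands in for the real spdk_nvme_perf job):

```shell
# Bounded liveness poll, mirroring target/delete_subsystem.sh@35-38.
# 'kill -0' sends no signal; it only tests whether the pid still exists.
sleep 2 &            # stand-in for the spdk_nvme_perf workload
perf_pid=$!

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    if (( delay++ > 20 )); then          # same post-increment guard as the trace
        echo "timed out waiting for $perf_pid"
        exit 1
    fi
    sleep 0.5
done
echo "workload $perf_pid exited after $delay polls"
```

When the process dies between polls, `kill -0` fails with ESRCH and the loop falls through, which is exactly the "No such process" transition visible later in the trace.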
00:06:51.239 ======================================================== 00:06:51.239 Latency(us) 00:06:51.239 Device Information : IOPS MiB/s Average min max 00:06:51.239 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 187.54 0.09 965416.77 775.74 1012077.43 00:06:51.239 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 152.81 0.07 886307.79 494.90 1012654.98 00:06:51.239 ======================================================== 00:06:51.239 Total : 340.34 0.17 929898.45 494.90 1012654.98 00:06:51.239 00:06:51.239 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:51.239 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3223620 00:06:51.239 [2024-11-20 08:57:28.209360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ceb9a0 (9): Bad file descriptor 00:06:51.239 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:51.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:51.807 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:51.807 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3223620 00:06:51.807 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3223620) - No such process 00:06:51.807 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3223620 00:06:51.807 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:06:51.807 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3223620 00:06:51.807 08:57:28 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:06:51.807 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.807 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:06:51.807 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.807 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3223620 00:06:51.807 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:06:51.807 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:51.807 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:51.807 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:51.807 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:51.807 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.807 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.807 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.807 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:51.807 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.807 08:57:28 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.807 [2024-11-20 08:57:28.730388] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:51.807 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.807 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.807 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.807 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.807 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.807 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3224031 00:06:51.807 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:51.807 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:51.807 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3224031 00:06:51.807 08:57:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:51.807 [2024-11-20 08:57:28.795048] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
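The `NOT wait 3223620` sequence above comes from autotest_common.sh: after the perf process is already gone, the test asserts that `wait` on its pid now *fails*. A minimal sketch of that inversion helper (assumption: the real version also runs `valid_exec_arg` and tracks the `es` error status, which is omitted here):

```shell
# Simplified NOT helper: succeed only when the wrapped command fails.
NOT() {
    if "$@"; then
        return 1   # inner command succeeded, so NOT fails
    fi
    return 0       # inner command failed, which is the expected outcome
}

sleep 0.1 &
pid=$!
wait "$pid"                    # first wait reaps the child
NOT wait "$pid" 2>/dev/null    # second wait fails: pid is no longer a child
echo "NOT wait exit status: $?"
```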
00:06:52.372 08:57:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:52.372 08:57:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3224031 00:06:52.372 08:57:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:52.938 08:57:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:52.938 08:57:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3224031 00:06:52.938 08:57:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:53.502 08:57:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:53.502 08:57:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3224031 00:06:53.502 08:57:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:53.760 08:57:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:53.760 08:57:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3224031 00:06:53.760 08:57:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:54.325 08:57:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:54.325 08:57:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3224031 00:06:54.325 08:57:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:54.890 08:57:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:54.890 08:57:31 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3224031 00:06:54.890 08:57:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:55.148 Initializing NVMe Controllers 00:06:55.148 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:55.148 Controller IO queue size 128, less than required. 00:06:55.148 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:55.148 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:55.148 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:55.148 Initialization complete. Launching workers. 00:06:55.148 ======================================================== 00:06:55.148 Latency(us) 00:06:55.148 Device Information : IOPS MiB/s Average min max 00:06:55.148 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003513.52 1000152.90 1012132.82 00:06:55.148 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004950.16 1000234.15 1041522.65 00:06:55.148 ======================================================== 00:06:55.148 Total : 256.00 0.12 1004231.84 1000152.90 1041522.65 00:06:55.148 00:06:55.406 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:55.406 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3224031 00:06:55.406 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3224031) - No such process 00:06:55.406 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3224031 00:06:55.406 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - 
SIGINT SIGTERM EXIT 00:06:55.406 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:55.406 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:55.406 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:55.406 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:55.406 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:55.406 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:55.406 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:55.406 rmmod nvme_tcp 00:06:55.406 rmmod nvme_fabrics 00:06:55.406 rmmod nvme_keyring 00:06:55.406 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:55.406 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:55.406 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:55.406 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3223588 ']' 00:06:55.406 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3223588 00:06:55.406 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 3223588 ']' 00:06:55.406 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 3223588 00:06:55.406 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:06:55.406 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:55.406 08:57:32 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3223588 00:06:55.406 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:55.406 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:55.406 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3223588' 00:06:55.406 killing process with pid 3223588 00:06:55.406 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 3223588 00:06:55.406 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 3223588 00:06:55.664 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:55.664 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:55.664 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:55.664 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:55.664 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:55.664 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:55.664 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:55.664 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:55.664 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:55.664 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:06:55.664 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:55.664 08:57:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:58.201 00:06:58.201 real 0m12.549s 00:06:58.201 user 0m28.042s 00:06:58.201 sys 0m3.090s 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:58.201 ************************************ 00:06:58.201 END TEST nvmf_delete_subsystem 00:06:58.201 ************************************ 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:58.201 ************************************ 00:06:58.201 START TEST nvmf_host_management 00:06:58.201 ************************************ 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:58.201 * Looking for test storage... 
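The nvmftestfini teardown above retries `modprobe -v -r nvme-tcp` inside `for i in {1..20}` with `set +e`, since the unload can fail while the transport still holds references. A runnable sketch of that retry shape (assumption: `try_unload` is an illustrative stand-in for modprobe, not part of SPDK):

```shell
# Retry-until-success loop, mirroring nvmf/common.sh@124-126.
set +e   # the trace disables errexit so a failed unload attempt is not fatal
attempt=0
try_unload() { (( attempt >= 3 )); }   # illustrative: succeeds on the 3rd try

for i in {1..20}; do
    attempt=$i
    if try_unload; then
        break
    fi
    sleep 0.05
done
set -e
echo "unloaded after $attempt attempts"
```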
00:06:58.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:58.201 08:57:34 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.201 08:57:34 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:58.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.201 --rc genhtml_branch_coverage=1 00:06:58.201 --rc genhtml_function_coverage=1 00:06:58.201 --rc genhtml_legend=1 00:06:58.201 --rc geninfo_all_blocks=1 00:06:58.201 --rc geninfo_unexecuted_blocks=1 00:06:58.201 00:06:58.201 ' 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:58.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.201 --rc genhtml_branch_coverage=1 00:06:58.201 --rc genhtml_function_coverage=1 00:06:58.201 --rc genhtml_legend=1 00:06:58.201 --rc geninfo_all_blocks=1 00:06:58.201 --rc geninfo_unexecuted_blocks=1 00:06:58.201 00:06:58.201 ' 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:58.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.201 --rc genhtml_branch_coverage=1 00:06:58.201 --rc genhtml_function_coverage=1 00:06:58.201 --rc genhtml_legend=1 00:06:58.201 --rc geninfo_all_blocks=1 00:06:58.201 --rc geninfo_unexecuted_blocks=1 00:06:58.201 00:06:58.201 ' 00:06:58.201 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:58.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.201 --rc genhtml_branch_coverage=1 00:06:58.202 --rc genhtml_function_coverage=1 00:06:58.202 --rc genhtml_legend=1 00:06:58.202 --rc geninfo_all_blocks=1 00:06:58.202 --rc geninfo_unexecuted_blocks=1 00:06:58.202 00:06:58.202 ' 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
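The `lt 1.15 2` check traced above (scripts/common.sh's `cmp_versions`) splits both versions on `.`, `-` and `:` and compares field by field, padding missing fields with 0. A condensed re-implementation of just the less-than path (assumption: the real helper also handles `>`, `>=`, `<=` and `==` operators):

```shell
# Field-wise version comparison: does $1 sort strictly before $2?
lt() {
    local -a ver1 ver2
    local v max
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not strictly less-than
}

lt 1.15 2 && echo "lcov 1.15 predates 2: use the legacy LCOV_OPTS"
```

This is why the trace above selects the `--rc lcov_branch_coverage=1` option spelling: lcov 1.15 compares as older than 2.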
00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:58.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:58.202 08:57:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:00.111 08:57:37 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:00.111 08:57:37 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:00.111 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:00.111 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:00.111 08:57:37 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:00.111 Found net devices under 0000:09:00.0: cvl_0_0 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:00.111 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:00.112 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:00.112 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:00.112 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.112 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:00.112 Found net devices under 0000:09:00.1: cvl_0_1 00:07:00.112 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:00.112 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:00.112 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:00.112 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:00.112 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:00.112 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:00.112 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:00.112 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:00.112 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:00.112 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:00.112 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:00.112 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:00.112 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:00.112 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:00.112 08:57:37 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:00.112 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:00.112 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:00.112 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:00.112 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:00.112 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:00.112 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:00.112 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:00.112 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:00.112 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:00.112 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:00.371 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:00.371 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:00.371 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:07:00.371 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:00.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:00.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:07:00.371 00:07:00.371 --- 10.0.0.2 ping statistics --- 00:07:00.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.371 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:07:00.371 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:00.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:00.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:07:00.371 00:07:00.371 --- 10.0.0.1 ping statistics --- 00:07:00.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.371 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:07:00.371 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:00.371 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:00.371 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:00.371 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:00.371 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:00.371 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:00.371 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:00.371 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:00.371 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
00:07:00.371 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:00.371 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:00.371 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:00.371 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:00.371 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:00.371 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.371 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3226497 00:07:00.371 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3226497 00:07:00.371 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:00.371 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3226497 ']' 00:07:00.371 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.371 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:00.371 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:00.371 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:00.371 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.371 [2024-11-20 08:57:37.243173] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:07:00.371 [2024-11-20 08:57:37.243270] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:00.371 [2024-11-20 08:57:37.313618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:00.371 [2024-11-20 08:57:37.370610] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:00.371 [2024-11-20 08:57:37.370664] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:00.371 [2024-11-20 08:57:37.370692] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:00.371 [2024-11-20 08:57:37.370703] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:00.371 [2024-11-20 08:57:37.370711] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:00.371 [2024-11-20 08:57:37.372336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.371 [2024-11-20 08:57:37.372385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:00.371 [2024-11-20 08:57:37.372435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:00.371 [2024-11-20 08:57:37.372439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.630 [2024-11-20 08:57:37.519332] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:00.630 08:57:37 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.630 Malloc0 00:07:00.630 [2024-11-20 08:57:37.589863] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3226553 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3226553 /var/tmp/bdevperf.sock 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3226553 ']' 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:00.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:00.630 { 00:07:00.630 "params": { 00:07:00.630 "name": "Nvme$subsystem", 00:07:00.630 "trtype": "$TEST_TRANSPORT", 00:07:00.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:00.630 "adrfam": "ipv4", 00:07:00.630 "trsvcid": "$NVMF_PORT", 00:07:00.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:00.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:00.630 "hdgst": ${hdgst:-false}, 
00:07:00.630 "ddgst": ${ddgst:-false} 00:07:00.630 }, 00:07:00.630 "method": "bdev_nvme_attach_controller" 00:07:00.630 } 00:07:00.630 EOF 00:07:00.630 )") 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:00.630 08:57:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:00.630 "params": { 00:07:00.630 "name": "Nvme0", 00:07:00.630 "trtype": "tcp", 00:07:00.630 "traddr": "10.0.0.2", 00:07:00.630 "adrfam": "ipv4", 00:07:00.630 "trsvcid": "4420", 00:07:00.630 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:00.630 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:00.630 "hdgst": false, 00:07:00.630 "ddgst": false 00:07:00.630 }, 00:07:00.630 "method": "bdev_nvme_attach_controller" 00:07:00.630 }' 00:07:00.888 [2024-11-20 08:57:37.665100] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:07:00.888 [2024-11-20 08:57:37.665175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3226553 ] 00:07:00.888 [2024-11-20 08:57:37.733541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.888 [2024-11-20 08:57:37.794140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.146 Running I/O for 10 seconds... 
00:07:01.146 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:01.146 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:07:01.146 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:01.146 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.146 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:01.146 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.146 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:01.146 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:01.146 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:01.146 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:01.146 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:01.146 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:01.146 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:01.146 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:01.146 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:07:01.146 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:01.146 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.146 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:01.403 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.403 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:01.403 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:01.403 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:01.662 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:01.662 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:01.662 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:01.662 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:01.662 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.662 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:01.662 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.662 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:07:01.662 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:07:01.662 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:01.662 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:01.662 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:01.662 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:01.662 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.662 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:01.662 [2024-11-20 08:57:38.500956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8df10 is same with the state(6) to be set 00:07:01.662 [2024-11-20 08:57:38.501079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8df10 is same with the state(6) to be set 00:07:01.663 [2024-11-20 08:57:38.501097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8df10 is same with the state(6) to be set 00:07:01.663 [2024-11-20 08:57:38.501109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8df10 is same with the state(6) to be set 00:07:01.663 [2024-11-20 08:57:38.501121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8df10 is same with the state(6) to be set 00:07:01.663 [2024-11-20 08:57:38.501133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8df10 is same with the state(6) to be set 00:07:01.663 [2024-11-20 08:57:38.501145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8df10 is same with the state(6) to be set 00:07:01.663 [2024-11-20 
08:57:38.501157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8df10 is same with the state(6) to be set 00:07:01.663 [2024-11-20 08:57:38.501168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8df10 is same with the state(6) to be set 00:07:01.663 [2024-11-20 08:57:38.501180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8df10 is same with the state(6) to be set 00:07:01.663 [2024-11-20 08:57:38.501191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8df10 is same with the state(6) to be set 00:07:01.663 [2024-11-20 08:57:38.501203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8df10 is same with the state(6) to be set 00:07:01.663 [2024-11-20 08:57:38.501214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8df10 is same with the state(6) to be set 00:07:01.663 [2024-11-20 08:57:38.501225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8df10 is same with the state(6) to be set 00:07:01.663 [2024-11-20 08:57:38.501237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8df10 is same with the state(6) to be set 00:07:01.663 [2024-11-20 08:57:38.501249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8df10 is same with the state(6) to be set 00:07:01.663 [2024-11-20 08:57:38.501271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8df10 is same with the state(6) to be set 00:07:01.663 [2024-11-20 08:57:38.501283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8df10 is same with the state(6) to be set 00:07:01.663 [2024-11-20 08:57:38.501295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8df10 is same with the state(6) to be set 00:07:01.663 [2024-11-20 08:57:38.501325] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8df10 is same with the state(6) to be set 00:07:01.663 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.663 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:01.663 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.663 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:01.663 [2024-11-20 08:57:38.509391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:01.663 [2024-11-20 08:57:38.509433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.663 [2024-11-20 08:57:38.509451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:01.663 [2024-11-20 08:57:38.509466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.663 [2024-11-20 08:57:38.509480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:01.663 [2024-11-20 08:57:38.509493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.663 [2024-11-20 08:57:38.509506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:01.663 [2024-11-20 08:57:38.509519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.663 [2024-11-20 08:57:38.509532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca40 is same with the state(6) to be set 00:07:01.663 [2024-11-20 08:57:38.509855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.663 [2024-11-20 08:57:38.509881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.663 [2024-11-20 08:57:38.509909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.663 [2024-11-20 08:57:38.509924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.663 [2024-11-20 08:57:38.509940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.663 [2024-11-20 08:57:38.509954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.663 [2024-11-20 08:57:38.509969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.663 [2024-11-20 08:57:38.509982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.663 [2024-11-20 08:57:38.509997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.663 [2024-11-20 08:57:38.510011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:07:01.663 [2024-11-20 08:57:38.510032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.663 [2024-11-20 08:57:38.510047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.663 [2024-11-20 08:57:38.510062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.663 [2024-11-20 08:57:38.510075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.663 [2024-11-20 08:57:38.510090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.663 [2024-11-20 08:57:38.510104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.663 [2024-11-20 08:57:38.510118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.663 [2024-11-20 08:57:38.510131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.663 [2024-11-20 08:57:38.510146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.663 [2024-11-20 08:57:38.510160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.663 [2024-11-20 08:57:38.510174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.663 [2024-11-20 08:57:38.510188] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.663 [2024-11-20 08:57:38.510202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.663 [2024-11-20 08:57:38.510215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.663 [2024-11-20 08:57:38.510230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.663 [2024-11-20 08:57:38.510244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.663 [2024-11-20 08:57:38.510258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.663 [2024-11-20 08:57:38.510271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.663 [2024-11-20 08:57:38.510286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.663 [2024-11-20 08:57:38.510299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.663 [2024-11-20 08:57:38.510328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.663 [2024-11-20 08:57:38.510350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.663 [2024-11-20 08:57:38.510365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.663 [2024-11-20 08:57:38.510378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.663 [2024-11-20 08:57:38.510393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.663 [2024-11-20 08:57:38.510412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.663 [2024-11-20 08:57:38.510427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.663 [2024-11-20 08:57:38.510440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.663 [2024-11-20 08:57:38.510455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.663 [2024-11-20 08:57:38.510468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.663 [2024-11-20 08:57:38.510483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.663 [2024-11-20 08:57:38.510496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.663 [2024-11-20 08:57:38.510511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.663 [2024-11-20 08:57:38.510524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.663 [2024-11-20 08:57:38.510538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.663 [2024-11-20 08:57:38.510551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.663 [2024-11-20 08:57:38.510566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.663 [2024-11-20 08:57:38.510579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.663 [2024-11-20 08:57:38.510593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.510606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.510621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.510635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.510650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.510663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.510677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 
[2024-11-20 08:57:38.510691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.510706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.510719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.510733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.510746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.510765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.510779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.510794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.510807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.510822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.510835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.510852] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.510866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.510881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.510894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.510909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.510923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.510937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.510951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.510966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.510979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.510994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.511008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.511022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.511036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.511050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.511064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.511079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.511093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.511108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.511125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.511141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.511155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.511169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.511184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.511198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.511212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.511227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.511241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.511255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.511269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.511283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.511297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.511320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.511335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 
[2024-11-20 08:57:38.511358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.511372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.511386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.511400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.511415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.511428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.511443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.511457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.511472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.511485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.511504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.511518] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.511534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.511548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.511562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.511576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.511590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.511603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.511619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.511636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.511651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.511664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.511679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.511692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.511707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.664 [2024-11-20 08:57:38.511721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.664 [2024-11-20 08:57:38.511735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:01.665 [2024-11-20 08:57:38.511749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:01.665 [2024-11-20 08:57:38.512941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:01.665 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.665 08:57:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:01.665 task offset: 81920 on job bdev=Nvme0n1 fails 00:07:01.665 00:07:01.665 Latency(us) 00:07:01.665 [2024-11-20T07:57:38.694Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:01.665 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:01.665 Job: Nvme0n1 ended in about 0.41 seconds with error 00:07:01.665 Verification LBA range: start 0x0 length 0x400 00:07:01.665 Nvme0n1 : 0.41 1571.42 98.21 157.14 0.00 35976.91 2779.21 34175.81 00:07:01.665 [2024-11-20T07:57:38.694Z] =================================================================================================================== 00:07:01.665 
[2024-11-20T07:57:38.694Z] Total : 1571.42 98.21 157.14 0.00 35976.91 2779.21 34175.81 00:07:01.665 [2024-11-20 08:57:38.514832] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:01.665 [2024-11-20 08:57:38.514875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca40 (9): Bad file descriptor 00:07:01.665 [2024-11-20 08:57:38.525123] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:07:02.596 08:57:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3226553 00:07:02.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3226553) - No such process 00:07:02.596 08:57:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:02.596 08:57:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:02.596 08:57:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:02.596 08:57:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:02.596 08:57:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:02.596 08:57:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:02.596 08:57:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:02.596 08:57:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:02.596 { 00:07:02.596 "params": { 00:07:02.596 
"name": "Nvme$subsystem", 00:07:02.596 "trtype": "$TEST_TRANSPORT", 00:07:02.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:02.596 "adrfam": "ipv4", 00:07:02.596 "trsvcid": "$NVMF_PORT", 00:07:02.596 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:02.596 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:02.596 "hdgst": ${hdgst:-false}, 00:07:02.596 "ddgst": ${ddgst:-false} 00:07:02.596 }, 00:07:02.596 "method": "bdev_nvme_attach_controller" 00:07:02.596 } 00:07:02.596 EOF 00:07:02.596 )") 00:07:02.596 08:57:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:02.596 08:57:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:02.596 08:57:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:02.596 08:57:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:02.596 "params": { 00:07:02.596 "name": "Nvme0", 00:07:02.596 "trtype": "tcp", 00:07:02.596 "traddr": "10.0.0.2", 00:07:02.597 "adrfam": "ipv4", 00:07:02.597 "trsvcid": "4420", 00:07:02.597 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:02.597 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:02.597 "hdgst": false, 00:07:02.597 "ddgst": false 00:07:02.597 }, 00:07:02.597 "method": "bdev_nvme_attach_controller" 00:07:02.597 }' 00:07:02.597 [2024-11-20 08:57:39.565241] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:07:02.597 [2024-11-20 08:57:39.565344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3226825 ] 00:07:02.854 [2024-11-20 08:57:39.633892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.854 [2024-11-20 08:57:39.694927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.854 Running I/O for 1 seconds... 00:07:04.228 1600.00 IOPS, 100.00 MiB/s 00:07:04.228 Latency(us) 00:07:04.228 [2024-11-20T07:57:41.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:04.228 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:04.228 Verification LBA range: start 0x0 length 0x400 00:07:04.228 Nvme0n1 : 1.02 1631.71 101.98 0.00 0.00 38591.97 4636.07 34758.35 00:07:04.228 [2024-11-20T07:57:41.257Z] =================================================================================================================== 00:07:04.228 [2024-11-20T07:57:41.257Z] Total : 1631.71 101.98 0.00 0.00 38591.97 4636.07 34758.35 00:07:04.228 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:04.228 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:04.228 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:04.228 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:04.228 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:04.228 08:57:41 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:04.228 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:04.228 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:04.228 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:04.228 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:04.228 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:04.228 rmmod nvme_tcp 00:07:04.228 rmmod nvme_fabrics 00:07:04.228 rmmod nvme_keyring 00:07:04.228 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:04.228 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:04.228 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:04.228 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3226497 ']' 00:07:04.228 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3226497 00:07:04.228 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 3226497 ']' 00:07:04.228 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 3226497 00:07:04.228 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:07:04.228 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:04.228 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3226497 00:07:04.228 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:04.228 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:04.228 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3226497' 00:07:04.228 killing process with pid 3226497 00:07:04.228 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 3226497 00:07:04.228 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 3226497 00:07:04.487 [2024-11-20 08:57:41.459628] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:04.487 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:04.487 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:04.487 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:04.487 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:04.487 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:04.487 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:04.487 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:04.487 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:04.487 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:04.487 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:04.487 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:04.487 08:57:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:07.018 00:07:07.018 real 0m8.840s 00:07:07.018 user 0m19.499s 00:07:07.018 sys 0m2.791s 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.018 ************************************ 00:07:07.018 END TEST nvmf_host_management 00:07:07.018 ************************************ 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:07.018 ************************************ 00:07:07.018 START TEST nvmf_lvol 00:07:07.018 ************************************ 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:07.018 * Looking for test storage... 
00:07:07.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.018 08:57:43 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:07.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.018 --rc genhtml_branch_coverage=1 00:07:07.018 --rc genhtml_function_coverage=1 00:07:07.018 --rc genhtml_legend=1 00:07:07.018 --rc geninfo_all_blocks=1 00:07:07.018 --rc geninfo_unexecuted_blocks=1 
00:07:07.018 00:07:07.018 ' 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:07.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.018 --rc genhtml_branch_coverage=1 00:07:07.018 --rc genhtml_function_coverage=1 00:07:07.018 --rc genhtml_legend=1 00:07:07.018 --rc geninfo_all_blocks=1 00:07:07.018 --rc geninfo_unexecuted_blocks=1 00:07:07.018 00:07:07.018 ' 00:07:07.018 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:07.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.019 --rc genhtml_branch_coverage=1 00:07:07.019 --rc genhtml_function_coverage=1 00:07:07.019 --rc genhtml_legend=1 00:07:07.019 --rc geninfo_all_blocks=1 00:07:07.019 --rc geninfo_unexecuted_blocks=1 00:07:07.019 00:07:07.019 ' 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:07.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.019 --rc genhtml_branch_coverage=1 00:07:07.019 --rc genhtml_function_coverage=1 00:07:07.019 --rc genhtml_legend=1 00:07:07.019 --rc geninfo_all_blocks=1 00:07:07.019 --rc geninfo_unexecuted_blocks=1 00:07:07.019 00:07:07.019 ' 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:07.019 08:57:43 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:07.019 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:07.019 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:07.020 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:07.020 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.020 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:07.020 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.020 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:07.020 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:07.020 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:07.020 08:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:08.978 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:08.978 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:08.978 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:08.978 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:08.978 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:08.978 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:08.978 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:08.978 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:08.978 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:08.978 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:08.978 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:08.978 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:08.978 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:08.978 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:07:08.978 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:08.978 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:08.978 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:08.978 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:08.978 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:08.978 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:08.978 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:08.978 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:08.978 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:08.978 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:08.978 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:08.978 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:08.978 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:08.978 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:09.237 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:09.237 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:09.237 
08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:09.237 Found net devices under 0000:09:00.0: cvl_0_0 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:09.237 08:57:45 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:09.237 Found net devices under 0000:09:00.1: cvl_0_1 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:09.237 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:09.238 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:09.238 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:09.238 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:09.238 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:09.238 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:09.238 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:09.238 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:09.238 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:09.238 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:09.238 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:09.238 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:09.238 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:07:09.238 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:09.238 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:09.238 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:09.238 08:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:09.238 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:09.238 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:09.238 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:09.238 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:09.238 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:09.238 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:09.238 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:09.238 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:09.238 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:09.238 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:09.238 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:09.238 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:07:09.238 00:07:09.238 --- 10.0.0.2 ping statistics --- 00:07:09.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.238 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:07:09.238 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:09.238 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:09.238 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:07:09.238 00:07:09.238 --- 10.0.0.1 ping statistics --- 00:07:09.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.238 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:07:09.238 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:09.238 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:09.238 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:09.238 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:09.238 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:09.238 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:09.238 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:09.238 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:09.238 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:09.238 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:09.238 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:09.238 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:07:09.238 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:09.238 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3228945 00:07:09.238 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3228945 00:07:09.238 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:09.238 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 3228945 ']' 00:07:09.238 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.238 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:09.238 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.238 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:09.238 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:09.238 [2024-11-20 08:57:46.210679] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:07:09.238 [2024-11-20 08:57:46.210752] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.496 [2024-11-20 08:57:46.282932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:09.496 [2024-11-20 08:57:46.337090] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:09.496 [2024-11-20 08:57:46.337141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:09.496 [2024-11-20 08:57:46.337168] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:09.496 [2024-11-20 08:57:46.337178] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:09.496 [2024-11-20 08:57:46.337187] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:09.496 [2024-11-20 08:57:46.338678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.496 [2024-11-20 08:57:46.338740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.496 [2024-11-20 08:57:46.338744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.496 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:09.496 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:07:09.496 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:09.496 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:09.496 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:09.496 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:09.496 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:09.754 [2024-11-20 08:57:46.736990] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:09.754 08:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:10.320 08:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:10.320 08:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:10.577 08:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:10.577 08:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:10.835 08:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:11.092 08:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=6f6fc4fb-fef8-461e-8fa0-3efa1da66c1a 00:07:11.092 08:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6f6fc4fb-fef8-461e-8fa0-3efa1da66c1a lvol 20 00:07:11.350 08:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=bfc4b422-de5a-4a84-a042-4a71b6642971 00:07:11.350 08:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:11.608 08:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bfc4b422-de5a-4a84-a042-4a71b6642971 00:07:11.867 08:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:12.125 [2024-11-20 08:57:48.991094] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:12.125 08:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:12.382 08:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3229361 00:07:12.382 08:57:49 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:12.382 08:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:13.315 08:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot bfc4b422-de5a-4a84-a042-4a71b6642971 MY_SNAPSHOT 00:07:13.573 08:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=12cd5eac-9f00-4501-9ac8-bd9e6387baec 00:07:13.573 08:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize bfc4b422-de5a-4a84-a042-4a71b6642971 30 00:07:14.139 08:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 12cd5eac-9f00-4501-9ac8-bd9e6387baec MY_CLONE 00:07:14.396 08:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a81809b6-6b53-4967-bd6e-f5676004d7e3 00:07:14.396 08:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a81809b6-6b53-4967-bd6e-f5676004d7e3 00:07:14.962 08:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3229361 00:07:23.068 Initializing NVMe Controllers 00:07:23.068 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:23.068 Controller IO queue size 128, less than required. 00:07:23.068 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
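The trace above builds and exercises the lvol stack one RPC at a time (transport, two malloc bdevs, a raid0 on top, an lvstore, an lvol, then the nvmf subsystem and listener, followed by snapshot/resize/clone/inflate while perf runs). Condensed into one place, the sequence is roughly the following sketch; it assumes a running SPDK nvmf target and `rpc.py` on PATH, and the variable names are illustrative, not the literal test script:

```shell
# Condensed sketch of the nvmf_lvol flow traced above; assumes a running SPDK
# nvmf target and rpc.py on PATH (paths shortened). Not the literal test script.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512                          # Malloc0
rpc.py bdev_malloc_create 64 512                          # Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)          # returns lvstore UUID
lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)         # returns lvol bdev UUID
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# While spdk_nvme_perf drives randwrite I/O against the subsystem:
snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)     # read-only snapshot
rpc.py bdev_lvol_resize "$lvol" 30                        # grow the live lvol
clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)          # clone of the snapshot
rpc.py bdev_lvol_inflate "$clone"                         # detach clone from snapshot
```

In this run the lvstore UUID was 6f6fc4fb-fef8-461e-8fa0-3efa1da66c1a and the lvol UUID bfc4b422-de5a-4a84-a042-4a71b6642971, as captured in the trace.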
00:07:23.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:23.068 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:23.068 Initialization complete. Launching workers. 00:07:23.068 ======================================================== 00:07:23.068 Latency(us) 00:07:23.068 Device Information : IOPS MiB/s Average min max 00:07:23.068 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10211.80 39.89 12539.04 1951.17 121051.68 00:07:23.068 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10337.10 40.38 12383.49 2418.09 59361.95 00:07:23.068 ======================================================== 00:07:23.068 Total : 20548.90 80.27 12460.79 1951.17 121051.68 00:07:23.068 00:07:23.068 08:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:23.068 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bfc4b422-de5a-4a84-a042-4a71b6642971 00:07:23.326 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6f6fc4fb-fef8-461e-8fa0-3efa1da66c1a 00:07:23.585 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:23.585 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:23.585 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:23.585 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:23.585 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:23.585 08:58:00 
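The Total row in the perf summary is the plain IOPS sum of the two per-core rows, with the average latency weighted by each core's IOPS. A quick awk check against the rows reported above:

```shell
# Recompute the Total row from the two per-core rows above: total IOPS is the
# sum, average latency is the IOPS-weighted mean of the per-core averages.
awk 'BEGIN {
    iops3 = 10211.80; avg3 = 12539.04   # "from core 3" row
    iops4 = 10337.10; avg4 = 12383.49   # "from core 4" row
    total = iops3 + iops4
    wavg  = (iops3 * avg3 + iops4 * avg4) / total
    printf "%.2f %.2f\n", total, wavg   # matches the reported 20548.90 / 12460.79
}'
```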
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:23.585 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:23.585 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:23.585 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:23.585 rmmod nvme_tcp 00:07:23.843 rmmod nvme_fabrics 00:07:23.843 rmmod nvme_keyring 00:07:23.843 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:23.843 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:23.843 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:23.843 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3228945 ']' 00:07:23.843 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3228945 00:07:23.843 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 3228945 ']' 00:07:23.843 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 3228945 00:07:23.843 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:07:23.843 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:23.843 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3228945 00:07:23.843 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:23.843 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:23.843 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3228945' 00:07:23.843 killing process with pid 3228945 00:07:23.843 
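The teardown path traced here (autotest_common.sh's `killprocess`) first probes the PID with `kill -0`, inspects the process name via `ps --no-headers -o comm=`, echoes the "killing process with pid" line, then kills and waits. A minimal sketch in the same shape, as a hypothetical simplified helper rather than the real function:

```shell
# Minimal sketch of a kill-and-reap helper in the shape of killprocess above.
# Simplified: the real helper also checks the process name (e.g. refuses "sudo").
killprocess() {
    local pid=$1
    if ! kill -0 "$pid" 2>/dev/null; then
        return 0                        # already gone, nothing to do
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true     # reap; ignore the signal exit status
}
```

Usage: `sleep 60 & killprocess $!` terminates and reaps the background job.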
08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 3228945 00:07:23.843 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 3228945 00:07:24.103 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:24.103 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:24.103 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:24.103 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:24.103 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:24.103 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:24.103 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:24.103 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:24.103 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:24.103 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.103 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:24.103 08:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.010 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:26.010 00:07:26.010 real 0m19.425s 00:07:26.010 user 1m5.693s 00:07:26.010 sys 0m5.664s 00:07:26.010 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:26.010 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:26.010 ************************************ 00:07:26.010 
END TEST nvmf_lvol 00:07:26.010 ************************************ 00:07:26.010 08:58:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:26.010 08:58:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:26.010 08:58:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:26.010 08:58:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:26.268 ************************************ 00:07:26.268 START TEST nvmf_lvs_grow 00:07:26.268 ************************************ 00:07:26.268 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:26.268 * Looking for test storage... 00:07:26.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:26.268 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:26.268 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:07:26.268 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:26.268 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:26.268 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:26.268 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:26.268 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:26.268 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.268 08:58:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:26.268 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:26.268 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:26.268 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:26.268 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:26.268 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:26.268 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:26.268 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:26.268 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:26.268 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:26.268 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:26.268 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:26.268 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:26.268 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.268 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:26.268 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:26.268 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:26.268 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:26.268 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.268 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:26.268 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:26.268 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:26.268 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:26.268 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:26.268 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:26.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.269 --rc genhtml_branch_coverage=1 00:07:26.269 --rc genhtml_function_coverage=1 00:07:26.269 --rc genhtml_legend=1 00:07:26.269 --rc geninfo_all_blocks=1 00:07:26.269 --rc geninfo_unexecuted_blocks=1 00:07:26.269 00:07:26.269 ' 
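The `lt 1.15 2` trace above walks scripts/common.sh's component-wise version comparison: split both versions on `.`/`-` with `IFS=.-` and `read -ra`, pad the shorter one with zeros, and compare left to right. A self-contained sketch of that logic (a hypothetical `version_lt` helper, not the actual `cmp_versions`):

```shell
# Component-wise "less than" over dotted versions, in the shape of the
# cmp_versions walk traced above: split on '.'/'-', zero-pad, compare per field.
version_lt() {
    local IFS=.-
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components count as 0
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1    # equal versions are not "less than"
}
```

`version_lt 1.15 2` succeeds at the first component (1 < 2), which is why the trace takes the lcov-older-than-2 branch and sets the extra `--rc` coverage options.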
00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:26.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.269 --rc genhtml_branch_coverage=1 00:07:26.269 --rc genhtml_function_coverage=1 00:07:26.269 --rc genhtml_legend=1 00:07:26.269 --rc geninfo_all_blocks=1 00:07:26.269 --rc geninfo_unexecuted_blocks=1 00:07:26.269 00:07:26.269 ' 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:26.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.269 --rc genhtml_branch_coverage=1 00:07:26.269 --rc genhtml_function_coverage=1 00:07:26.269 --rc genhtml_legend=1 00:07:26.269 --rc geninfo_all_blocks=1 00:07:26.269 --rc geninfo_unexecuted_blocks=1 00:07:26.269 00:07:26.269 ' 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:26.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.269 --rc genhtml_branch_coverage=1 00:07:26.269 --rc genhtml_function_coverage=1 00:07:26.269 --rc genhtml_legend=1 00:07:26.269 --rc geninfo_all_blocks=1 00:07:26.269 --rc geninfo_unexecuted_blocks=1 00:07:26.269 00:07:26.269 ' 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.269 08:58:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.269 
08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.269 08:58:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:26.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.269 
08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:26.269 08:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:28.801 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:28.801 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:28.801 
08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:28.801 Found net devices under 0000:09:00.0: cvl_0_0 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:28.801 Found net devices under 0000:09:00.1: cvl_0_1 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:28.801 08:58:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:28.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:28.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:07:28.801 00:07:28.801 --- 10.0.0.2 ping statistics --- 00:07:28.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.801 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:07:28.801 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:28.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:28.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:07:28.802 00:07:28.802 --- 10.0.0.1 ping statistics --- 00:07:28.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.802 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:07:28.802 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:28.802 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:28.802 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:28.802 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:28.802 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:28.802 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:28.802 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:28.802 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:28.802 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:28.802 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:07:28.802 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:28.802 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:28.802 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:28.802 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3232761 00:07:28.802 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:28.802 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3232761 00:07:28.802 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 3232761 ']' 00:07:28.802 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.802 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:28.802 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.802 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:28.802 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:28.802 [2024-11-20 08:58:05.700642] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:07:28.802 [2024-11-20 08:58:05.700734] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.802 [2024-11-20 08:58:05.774211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.060 [2024-11-20 08:58:05.834200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:29.060 [2024-11-20 08:58:05.834243] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:29.060 [2024-11-20 08:58:05.834272] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:29.060 [2024-11-20 08:58:05.834282] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:29.060 [2024-11-20 08:58:05.834292] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:29.060 [2024-11-20 08:58:05.834931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.060 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:29.060 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:07:29.060 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:29.060 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:29.060 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:29.060 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:29.060 08:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:29.318 [2024-11-20 08:58:06.229053] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:29.318 08:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:29.318 08:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:29.318 08:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:29.318 08:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:29.318 ************************************ 00:07:29.318 START TEST lvs_grow_clean 00:07:29.318 ************************************ 00:07:29.318 08:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:07:29.318 08:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:07:29.318 08:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:29.318 08:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:29.318 08:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:29.318 08:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:29.318 08:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:29.318 08:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:29.318 08:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:29.318 08:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:29.575 08:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:29.575 08:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:29.833 08:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=dca13a9e-dba5-4799-9aba-2a2982c311cd 00:07:29.833 08:58:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dca13a9e-dba5-4799-9aba-2a2982c311cd 00:07:29.833 08:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:30.400 08:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:30.400 08:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:30.400 08:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u dca13a9e-dba5-4799-9aba-2a2982c311cd lvol 150 00:07:30.400 08:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=8b12b09e-4b6b-432b-a246-3c735018ab5e 00:07:30.400 08:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:30.400 08:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:30.658 [2024-11-20 08:58:07.654731] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:30.658 [2024-11-20 08:58:07.654823] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:30.658 true 00:07:30.658 08:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dca13a9e-dba5-4799-9aba-2a2982c311cd 00:07:30.658 08:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:31.224 08:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:31.224 08:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:31.224 08:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8b12b09e-4b6b-432b-a246-3c735018ab5e 00:07:31.482 08:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:31.739 [2024-11-20 08:58:08.742003] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.740 08:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:32.305 08:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3233207 00:07:32.305 08:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:32.305 08:58:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:32.305 08:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3233207 /var/tmp/bdevperf.sock 00:07:32.305 08:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 3233207 ']' 00:07:32.305 08:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:32.305 08:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:32.305 08:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:32.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:32.305 08:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:32.305 08:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:32.305 [2024-11-20 08:58:09.071809] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:07:32.305 [2024-11-20 08:58:09.071881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3233207 ] 00:07:32.305 [2024-11-20 08:58:09.137425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.305 [2024-11-20 08:58:09.198085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.562 08:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:32.562 08:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:07:32.562 08:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:32.821 Nvme0n1 00:07:32.821 08:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:33.078 [ 00:07:33.078 { 00:07:33.078 "name": "Nvme0n1", 00:07:33.078 "aliases": [ 00:07:33.078 "8b12b09e-4b6b-432b-a246-3c735018ab5e" 00:07:33.078 ], 00:07:33.078 "product_name": "NVMe disk", 00:07:33.078 "block_size": 4096, 00:07:33.078 "num_blocks": 38912, 00:07:33.078 "uuid": "8b12b09e-4b6b-432b-a246-3c735018ab5e", 00:07:33.078 "numa_id": 0, 00:07:33.078 "assigned_rate_limits": { 00:07:33.078 "rw_ios_per_sec": 0, 00:07:33.078 "rw_mbytes_per_sec": 0, 00:07:33.078 "r_mbytes_per_sec": 0, 00:07:33.078 "w_mbytes_per_sec": 0 00:07:33.078 }, 00:07:33.078 "claimed": false, 00:07:33.078 "zoned": false, 00:07:33.078 "supported_io_types": { 00:07:33.078 "read": true, 
00:07:33.078 "write": true, 00:07:33.078 "unmap": true, 00:07:33.078 "flush": true, 00:07:33.078 "reset": true, 00:07:33.078 "nvme_admin": true, 00:07:33.078 "nvme_io": true, 00:07:33.078 "nvme_io_md": false, 00:07:33.078 "write_zeroes": true, 00:07:33.078 "zcopy": false, 00:07:33.078 "get_zone_info": false, 00:07:33.078 "zone_management": false, 00:07:33.078 "zone_append": false, 00:07:33.078 "compare": true, 00:07:33.078 "compare_and_write": true, 00:07:33.078 "abort": true, 00:07:33.078 "seek_hole": false, 00:07:33.078 "seek_data": false, 00:07:33.078 "copy": true, 00:07:33.078 "nvme_iov_md": false 00:07:33.078 }, 00:07:33.078 "memory_domains": [ 00:07:33.078 { 00:07:33.078 "dma_device_id": "system", 00:07:33.078 "dma_device_type": 1 00:07:33.078 } 00:07:33.078 ], 00:07:33.078 "driver_specific": { 00:07:33.078 "nvme": [ 00:07:33.078 { 00:07:33.078 "trid": { 00:07:33.078 "trtype": "TCP", 00:07:33.078 "adrfam": "IPv4", 00:07:33.078 "traddr": "10.0.0.2", 00:07:33.078 "trsvcid": "4420", 00:07:33.078 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:33.078 }, 00:07:33.078 "ctrlr_data": { 00:07:33.078 "cntlid": 1, 00:07:33.078 "vendor_id": "0x8086", 00:07:33.078 "model_number": "SPDK bdev Controller", 00:07:33.078 "serial_number": "SPDK0", 00:07:33.078 "firmware_revision": "25.01", 00:07:33.078 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:33.078 "oacs": { 00:07:33.078 "security": 0, 00:07:33.078 "format": 0, 00:07:33.078 "firmware": 0, 00:07:33.078 "ns_manage": 0 00:07:33.078 }, 00:07:33.078 "multi_ctrlr": true, 00:07:33.078 "ana_reporting": false 00:07:33.078 }, 00:07:33.078 "vs": { 00:07:33.078 "nvme_version": "1.3" 00:07:33.078 }, 00:07:33.078 "ns_data": { 00:07:33.078 "id": 1, 00:07:33.078 "can_share": true 00:07:33.078 } 00:07:33.078 } 00:07:33.078 ], 00:07:33.078 "mp_policy": "active_passive" 00:07:33.078 } 00:07:33.078 } 00:07:33.078 ] 00:07:33.078 08:58:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3233342 00:07:33.078 08:58:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:33.078 08:58:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:33.336 Running I/O for 10 seconds... 00:07:34.271 Latency(us) 00:07:34.271 [2024-11-20T07:58:11.300Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:34.271 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.271 Nvme0n1 : 1.00 14606.00 57.05 0.00 0.00 0.00 0.00 0.00 00:07:34.271 [2024-11-20T07:58:11.300Z] =================================================================================================================== 00:07:34.271 [2024-11-20T07:58:11.300Z] Total : 14606.00 57.05 0.00 0.00 0.00 0.00 0.00 00:07:34.271 00:07:35.204 08:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u dca13a9e-dba5-4799-9aba-2a2982c311cd 00:07:35.204 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.204 Nvme0n1 : 2.00 14859.50 58.04 0.00 0.00 0.00 0.00 0.00 00:07:35.204 [2024-11-20T07:58:12.233Z] =================================================================================================================== 00:07:35.204 [2024-11-20T07:58:12.233Z] Total : 14859.50 58.04 0.00 0.00 0.00 0.00 0.00 00:07:35.204 00:07:35.462 true 00:07:35.462 08:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dca13a9e-dba5-4799-9aba-2a2982c311cd 00:07:35.462 08:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:35.720 08:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:35.720 08:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:35.720 08:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3233342 00:07:36.286 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.286 Nvme0n1 : 3.00 14944.33 58.38 0.00 0.00 0.00 0.00 0.00 00:07:36.286 [2024-11-20T07:58:13.315Z] =================================================================================================================== 00:07:36.286 [2024-11-20T07:58:13.315Z] Total : 14944.33 58.38 0.00 0.00 0.00 0.00 0.00 00:07:36.286 00:07:37.221 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.221 Nvme0n1 : 4.00 15036.00 58.73 0.00 0.00 0.00 0.00 0.00 00:07:37.221 [2024-11-20T07:58:14.250Z] =================================================================================================================== 00:07:37.221 [2024-11-20T07:58:14.250Z] Total : 15036.00 58.73 0.00 0.00 0.00 0.00 0.00 00:07:37.221 00:07:38.596 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.596 Nvme0n1 : 5.00 15127.60 59.09 0.00 0.00 0.00 0.00 0.00 00:07:38.596 [2024-11-20T07:58:15.625Z] =================================================================================================================== 00:07:38.596 [2024-11-20T07:58:15.625Z] Total : 15127.60 59.09 0.00 0.00 0.00 0.00 0.00 00:07:38.596 00:07:39.529 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.529 Nvme0n1 : 6.00 15209.83 59.41 0.00 0.00 0.00 0.00 0.00 00:07:39.529 [2024-11-20T07:58:16.558Z] =================================================================================================================== 00:07:39.529 
[2024-11-20T07:58:16.558Z] Total : 15209.83 59.41 0.00 0.00 0.00 0.00 0.00 00:07:39.529 00:07:40.462 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.462 Nvme0n1 : 7.00 15250.43 59.57 0.00 0.00 0.00 0.00 0.00 00:07:40.462 [2024-11-20T07:58:17.491Z] =================================================================================================================== 00:07:40.462 [2024-11-20T07:58:17.491Z] Total : 15250.43 59.57 0.00 0.00 0.00 0.00 0.00 00:07:40.462 00:07:41.396 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.396 Nvme0n1 : 8.00 15296.75 59.75 0.00 0.00 0.00 0.00 0.00 00:07:41.396 [2024-11-20T07:58:18.425Z] =================================================================================================================== 00:07:41.396 [2024-11-20T07:58:18.426Z] Total : 15296.75 59.75 0.00 0.00 0.00 0.00 0.00 00:07:41.397 00:07:42.341 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.341 Nvme0n1 : 9.00 15325.78 59.87 0.00 0.00 0.00 0.00 0.00 00:07:42.341 [2024-11-20T07:58:19.370Z] =================================================================================================================== 00:07:42.341 [2024-11-20T07:58:19.370Z] Total : 15325.78 59.87 0.00 0.00 0.00 0.00 0.00 00:07:42.341 00:07:43.275 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.275 Nvme0n1 : 10.00 15339.60 59.92 0.00 0.00 0.00 0.00 0.00 00:07:43.275 [2024-11-20T07:58:20.304Z] =================================================================================================================== 00:07:43.275 [2024-11-20T07:58:20.304Z] Total : 15339.60 59.92 0.00 0.00 0.00 0.00 0.00 00:07:43.275 00:07:43.275 00:07:43.275 Latency(us) 00:07:43.275 [2024-11-20T07:58:20.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:43.275 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:43.275 Nvme0n1 : 10.01 15342.09 59.93 0.00 0.00 8338.75 3543.80 16602.45 00:07:43.275 [2024-11-20T07:58:20.304Z] =================================================================================================================== 00:07:43.275 [2024-11-20T07:58:20.304Z] Total : 15342.09 59.93 0.00 0.00 8338.75 3543.80 16602.45 00:07:43.275 { 00:07:43.275 "results": [ 00:07:43.275 { 00:07:43.275 "job": "Nvme0n1", 00:07:43.275 "core_mask": "0x2", 00:07:43.275 "workload": "randwrite", 00:07:43.275 "status": "finished", 00:07:43.275 "queue_depth": 128, 00:07:43.275 "io_size": 4096, 00:07:43.275 "runtime": 10.006723, 00:07:43.275 "iops": 15342.085515907655, 00:07:43.275 "mibps": 59.930021546514276, 00:07:43.275 "io_failed": 0, 00:07:43.275 "io_timeout": 0, 00:07:43.275 "avg_latency_us": 8338.745438049498, 00:07:43.275 "min_latency_us": 3543.7985185185184, 00:07:43.275 "max_latency_us": 16602.453333333335 00:07:43.275 } 00:07:43.275 ], 00:07:43.275 "core_count": 1 00:07:43.275 } 00:07:43.275 08:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3233207 00:07:43.275 08:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 3233207 ']' 00:07:43.275 08:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 3233207 00:07:43.275 08:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:07:43.275 08:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:43.275 08:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3233207 00:07:43.275 08:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:43.275 08:58:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:43.275 08:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3233207' 00:07:43.275 killing process with pid 3233207 00:07:43.275 08:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 3233207 00:07:43.275 Received shutdown signal, test time was about 10.000000 seconds 00:07:43.275 00:07:43.275 Latency(us) 00:07:43.275 [2024-11-20T07:58:20.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:43.275 [2024-11-20T07:58:20.304Z] =================================================================================================================== 00:07:43.275 [2024-11-20T07:58:20.304Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:43.275 08:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 3233207 00:07:43.533 08:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:43.791 08:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:44.049 08:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dca13a9e-dba5-4799-9aba-2a2982c311cd 00:07:44.049 08:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:44.380 08:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:07:44.380 08:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:44.380 08:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:44.638 [2024-11-20 08:58:21.560117] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:44.638 08:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dca13a9e-dba5-4799-9aba-2a2982c311cd 00:07:44.638 08:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:44.638 08:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dca13a9e-dba5-4799-9aba-2a2982c311cd 00:07:44.638 08:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:44.638 08:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:44.638 08:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:44.638 08:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:44.638 08:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:44.638 
08:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:44.638 08:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:44.638 08:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:44.638 08:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dca13a9e-dba5-4799-9aba-2a2982c311cd 00:07:44.896 request: 00:07:44.896 { 00:07:44.896 "uuid": "dca13a9e-dba5-4799-9aba-2a2982c311cd", 00:07:44.896 "method": "bdev_lvol_get_lvstores", 00:07:44.896 "req_id": 1 00:07:44.896 } 00:07:44.896 Got JSON-RPC error response 00:07:44.896 response: 00:07:44.896 { 00:07:44.896 "code": -19, 00:07:44.896 "message": "No such device" 00:07:44.896 } 00:07:44.896 08:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:44.896 08:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:44.896 08:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:44.896 08:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:44.897 08:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:45.155 aio_bdev 00:07:45.155 08:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8b12b09e-4b6b-432b-a246-3c735018ab5e 00:07:45.155 08:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=8b12b09e-4b6b-432b-a246-3c735018ab5e 00:07:45.155 08:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:45.155 08:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:07:45.155 08:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:45.155 08:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:45.155 08:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:45.444 08:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8b12b09e-4b6b-432b-a246-3c735018ab5e -t 2000 00:07:45.728 [ 00:07:45.728 { 00:07:45.728 "name": "8b12b09e-4b6b-432b-a246-3c735018ab5e", 00:07:45.728 "aliases": [ 00:07:45.728 "lvs/lvol" 00:07:45.728 ], 00:07:45.728 "product_name": "Logical Volume", 00:07:45.728 "block_size": 4096, 00:07:45.728 "num_blocks": 38912, 00:07:45.728 "uuid": "8b12b09e-4b6b-432b-a246-3c735018ab5e", 00:07:45.728 "assigned_rate_limits": { 00:07:45.728 "rw_ios_per_sec": 0, 00:07:45.728 "rw_mbytes_per_sec": 0, 00:07:45.728 "r_mbytes_per_sec": 0, 00:07:45.728 "w_mbytes_per_sec": 0 00:07:45.728 }, 00:07:45.728 "claimed": false, 00:07:45.728 "zoned": false, 00:07:45.728 "supported_io_types": { 00:07:45.728 "read": true, 00:07:45.728 "write": true, 00:07:45.728 "unmap": true, 00:07:45.728 "flush": false, 00:07:45.728 "reset": true, 00:07:45.728 
"nvme_admin": false, 00:07:45.728 "nvme_io": false, 00:07:45.728 "nvme_io_md": false, 00:07:45.728 "write_zeroes": true, 00:07:45.728 "zcopy": false, 00:07:45.728 "get_zone_info": false, 00:07:45.728 "zone_management": false, 00:07:45.728 "zone_append": false, 00:07:45.728 "compare": false, 00:07:45.728 "compare_and_write": false, 00:07:45.728 "abort": false, 00:07:45.728 "seek_hole": true, 00:07:45.728 "seek_data": true, 00:07:45.728 "copy": false, 00:07:45.728 "nvme_iov_md": false 00:07:45.728 }, 00:07:45.728 "driver_specific": { 00:07:45.728 "lvol": { 00:07:45.728 "lvol_store_uuid": "dca13a9e-dba5-4799-9aba-2a2982c311cd", 00:07:45.728 "base_bdev": "aio_bdev", 00:07:45.728 "thin_provision": false, 00:07:45.728 "num_allocated_clusters": 38, 00:07:45.728 "snapshot": false, 00:07:45.728 "clone": false, 00:07:45.728 "esnap_clone": false 00:07:45.728 } 00:07:45.728 } 00:07:45.728 } 00:07:45.728 ] 00:07:45.728 08:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:07:45.728 08:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dca13a9e-dba5-4799-9aba-2a2982c311cd 00:07:45.728 08:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:45.986 08:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:45.986 08:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dca13a9e-dba5-4799-9aba-2a2982c311cd 00:07:45.986 08:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:46.244 08:58:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:46.244 08:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8b12b09e-4b6b-432b-a246-3c735018ab5e 00:07:46.501 08:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dca13a9e-dba5-4799-9aba-2a2982c311cd 00:07:46.759 08:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:47.017 08:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:47.275 00:07:47.275 real 0m17.787s 00:07:47.275 user 0m17.405s 00:07:47.275 sys 0m1.921s 00:07:47.275 08:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:47.275 08:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:47.275 ************************************ 00:07:47.275 END TEST lvs_grow_clean 00:07:47.275 ************************************ 00:07:47.275 08:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:47.275 08:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:47.275 08:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:47.275 08:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:47.275 ************************************ 
00:07:47.275 START TEST lvs_grow_dirty 00:07:47.275 ************************************ 00:07:47.275 08:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:07:47.275 08:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:47.275 08:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:47.275 08:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:47.275 08:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:47.275 08:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:47.275 08:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:47.275 08:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:47.275 08:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:47.275 08:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:47.533 08:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:47.533 08:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:47.791 08:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=5f5cebb7-b57f-4aa4-ac83-d29307ba456d 00:07:47.791 08:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f5cebb7-b57f-4aa4-ac83-d29307ba456d 00:07:47.791 08:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:48.049 08:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:48.049 08:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:48.049 08:58:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5f5cebb7-b57f-4aa4-ac83-d29307ba456d lvol 150 00:07:48.307 08:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a0a6d6aa-7157-4525-994d-e9d303e7f2db 00:07:48.307 08:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:48.307 08:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:48.565 [2024-11-20 08:58:25.484788] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:07:48.565 [2024-11-20 08:58:25.484890] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:48.565 true 00:07:48.565 08:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f5cebb7-b57f-4aa4-ac83-d29307ba456d 00:07:48.565 08:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:48.823 08:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:48.823 08:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:49.080 08:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a0a6d6aa-7157-4525-994d-e9d303e7f2db 00:07:49.337 08:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:49.595 [2024-11-20 08:58:26.580047] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:49.595 08:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:49.853 08:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3235320 00:07:49.853 08:58:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:49.853 08:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:49.853 08:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3235320 /var/tmp/bdevperf.sock 00:07:49.853 08:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3235320 ']' 00:07:49.853 08:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:49.853 08:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:49.853 08:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:49.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:49.853 08:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:49.853 08:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:50.112 [2024-11-20 08:58:26.915907] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:07:50.112 [2024-11-20 08:58:26.915991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3235320 ] 00:07:50.112 [2024-11-20 08:58:26.983545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.112 [2024-11-20 08:58:27.040397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.370 08:58:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:50.370 08:58:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:07:50.370 08:58:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:50.627 Nvme0n1 00:07:50.627 08:58:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:50.888 [ 00:07:50.888 { 00:07:50.888 "name": "Nvme0n1", 00:07:50.888 "aliases": [ 00:07:50.888 "a0a6d6aa-7157-4525-994d-e9d303e7f2db" 00:07:50.888 ], 00:07:50.888 "product_name": "NVMe disk", 00:07:50.888 "block_size": 4096, 00:07:50.888 "num_blocks": 38912, 00:07:50.888 "uuid": "a0a6d6aa-7157-4525-994d-e9d303e7f2db", 00:07:50.888 "numa_id": 0, 00:07:50.888 "assigned_rate_limits": { 00:07:50.888 "rw_ios_per_sec": 0, 00:07:50.888 "rw_mbytes_per_sec": 0, 00:07:50.888 "r_mbytes_per_sec": 0, 00:07:50.888 "w_mbytes_per_sec": 0 00:07:50.888 }, 00:07:50.888 "claimed": false, 00:07:50.888 "zoned": false, 00:07:50.888 "supported_io_types": { 00:07:50.888 "read": true, 
00:07:50.888 "write": true, 00:07:50.888 "unmap": true, 00:07:50.888 "flush": true, 00:07:50.888 "reset": true, 00:07:50.888 "nvme_admin": true, 00:07:50.888 "nvme_io": true, 00:07:50.888 "nvme_io_md": false, 00:07:50.888 "write_zeroes": true, 00:07:50.888 "zcopy": false, 00:07:50.888 "get_zone_info": false, 00:07:50.888 "zone_management": false, 00:07:50.888 "zone_append": false, 00:07:50.888 "compare": true, 00:07:50.888 "compare_and_write": true, 00:07:50.888 "abort": true, 00:07:50.888 "seek_hole": false, 00:07:50.888 "seek_data": false, 00:07:50.888 "copy": true, 00:07:50.888 "nvme_iov_md": false 00:07:50.888 }, 00:07:50.888 "memory_domains": [ 00:07:50.888 { 00:07:50.888 "dma_device_id": "system", 00:07:50.888 "dma_device_type": 1 00:07:50.888 } 00:07:50.888 ], 00:07:50.888 "driver_specific": { 00:07:50.888 "nvme": [ 00:07:50.888 { 00:07:50.888 "trid": { 00:07:50.888 "trtype": "TCP", 00:07:50.888 "adrfam": "IPv4", 00:07:50.888 "traddr": "10.0.0.2", 00:07:50.888 "trsvcid": "4420", 00:07:50.888 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:50.888 }, 00:07:50.888 "ctrlr_data": { 00:07:50.888 "cntlid": 1, 00:07:50.888 "vendor_id": "0x8086", 00:07:50.888 "model_number": "SPDK bdev Controller", 00:07:50.889 "serial_number": "SPDK0", 00:07:50.889 "firmware_revision": "25.01", 00:07:50.889 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:50.889 "oacs": { 00:07:50.889 "security": 0, 00:07:50.889 "format": 0, 00:07:50.889 "firmware": 0, 00:07:50.889 "ns_manage": 0 00:07:50.889 }, 00:07:50.889 "multi_ctrlr": true, 00:07:50.889 "ana_reporting": false 00:07:50.889 }, 00:07:50.889 "vs": { 00:07:50.889 "nvme_version": "1.3" 00:07:50.889 }, 00:07:50.889 "ns_data": { 00:07:50.889 "id": 1, 00:07:50.889 "can_share": true 00:07:50.889 } 00:07:50.889 } 00:07:50.889 ], 00:07:50.889 "mp_policy": "active_passive" 00:07:50.889 } 00:07:50.889 } 00:07:50.889 ] 00:07:50.889 08:58:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3235417 00:07:50.889 08:58:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:50.889 08:58:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:51.147 Running I/O for 10 seconds... 00:07:52.082 Latency(us) 00:07:52.082 [2024-11-20T07:58:29.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:52.082 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.082 Nvme0n1 : 1.00 14860.00 58.05 0.00 0.00 0.00 0.00 0.00 00:07:52.082 [2024-11-20T07:58:29.111Z] =================================================================================================================== 00:07:52.082 [2024-11-20T07:58:29.111Z] Total : 14860.00 58.05 0.00 0.00 0.00 0.00 0.00 00:07:52.082 00:07:53.014 08:58:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5f5cebb7-b57f-4aa4-ac83-d29307ba456d 00:07:53.014 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.014 Nvme0n1 : 2.00 15051.50 58.79 0.00 0.00 0.00 0.00 0.00 00:07:53.014 [2024-11-20T07:58:30.043Z] =================================================================================================================== 00:07:53.014 [2024-11-20T07:58:30.043Z] Total : 15051.50 58.79 0.00 0.00 0.00 0.00 0.00 00:07:53.014 00:07:53.272 true 00:07:53.272 08:58:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f5cebb7-b57f-4aa4-ac83-d29307ba456d 00:07:53.272 08:58:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:53.530 08:58:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:53.530 08:58:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:53.530 08:58:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3235417 00:07:54.097 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.097 Nvme0n1 : 3.00 15114.33 59.04 0.00 0.00 0.00 0.00 0.00 00:07:54.097 [2024-11-20T07:58:31.126Z] =================================================================================================================== 00:07:54.097 [2024-11-20T07:58:31.126Z] Total : 15114.33 59.04 0.00 0.00 0.00 0.00 0.00 00:07:54.097 00:07:55.031 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.031 Nvme0n1 : 4.00 15225.50 59.47 0.00 0.00 0.00 0.00 0.00 00:07:55.031 [2024-11-20T07:58:32.060Z] =================================================================================================================== 00:07:55.031 [2024-11-20T07:58:32.060Z] Total : 15225.50 59.47 0.00 0.00 0.00 0.00 0.00 00:07:55.031 00:07:55.967 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.967 Nvme0n1 : 5.00 15305.20 59.79 0.00 0.00 0.00 0.00 0.00 00:07:55.967 [2024-11-20T07:58:32.996Z] =================================================================================================================== 00:07:55.967 [2024-11-20T07:58:32.996Z] Total : 15305.20 59.79 0.00 0.00 0.00 0.00 0.00 00:07:55.967 00:07:57.340 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.340 Nvme0n1 : 6.00 15368.67 60.03 0.00 0.00 0.00 0.00 0.00 00:07:57.340 [2024-11-20T07:58:34.369Z] =================================================================================================================== 00:07:57.340 
[2024-11-20T07:58:34.369Z] Total : 15368.67 60.03 0.00 0.00 0.00 0.00 0.00 00:07:57.340 00:07:58.277 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.277 Nvme0n1 : 7.00 15414.00 60.21 0.00 0.00 0.00 0.00 0.00 00:07:58.277 [2024-11-20T07:58:35.306Z] =================================================================================================================== 00:07:58.277 [2024-11-20T07:58:35.306Z] Total : 15414.00 60.21 0.00 0.00 0.00 0.00 0.00 00:07:58.277 00:07:59.211 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.211 Nvme0n1 : 8.00 15463.75 60.41 0.00 0.00 0.00 0.00 0.00 00:07:59.211 [2024-11-20T07:58:36.240Z] =================================================================================================================== 00:07:59.211 [2024-11-20T07:58:36.240Z] Total : 15463.75 60.41 0.00 0.00 0.00 0.00 0.00 00:07:59.211 00:08:00.146 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.146 Nvme0n1 : 9.00 15502.33 60.56 0.00 0.00 0.00 0.00 0.00 00:08:00.146 [2024-11-20T07:58:37.175Z] =================================================================================================================== 00:08:00.146 [2024-11-20T07:58:37.175Z] Total : 15502.33 60.56 0.00 0.00 0.00 0.00 0.00 00:08:00.146 00:08:01.082 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.082 Nvme0n1 : 10.00 15527.50 60.65 0.00 0.00 0.00 0.00 0.00 00:08:01.082 [2024-11-20T07:58:38.111Z] =================================================================================================================== 00:08:01.082 [2024-11-20T07:58:38.111Z] Total : 15527.50 60.65 0.00 0.00 0.00 0.00 0.00 00:08:01.082 00:08:01.082 00:08:01.082 Latency(us) 00:08:01.082 [2024-11-20T07:58:38.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:01.082 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:01.082 Nvme0n1 : 10.01 15530.52 60.67 0.00 0.00 8237.23 3373.89 16408.27 00:08:01.082 [2024-11-20T07:58:38.111Z] =================================================================================================================== 00:08:01.082 [2024-11-20T07:58:38.111Z] Total : 15530.52 60.67 0.00 0.00 8237.23 3373.89 16408.27 00:08:01.082 { 00:08:01.082 "results": [ 00:08:01.082 { 00:08:01.082 "job": "Nvme0n1", 00:08:01.082 "core_mask": "0x2", 00:08:01.082 "workload": "randwrite", 00:08:01.082 "status": "finished", 00:08:01.082 "queue_depth": 128, 00:08:01.082 "io_size": 4096, 00:08:01.082 "runtime": 10.006296, 00:08:01.082 "iops": 15530.521983359276, 00:08:01.082 "mibps": 60.66610149749717, 00:08:01.082 "io_failed": 0, 00:08:01.082 "io_timeout": 0, 00:08:01.082 "avg_latency_us": 8237.23291492776, 00:08:01.082 "min_latency_us": 3373.8903703703704, 00:08:01.082 "max_latency_us": 16408.27259259259 00:08:01.082 } 00:08:01.082 ], 00:08:01.082 "core_count": 1 00:08:01.082 } 00:08:01.082 08:58:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3235320 00:08:01.082 08:58:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 3235320 ']' 00:08:01.082 08:58:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 3235320 00:08:01.082 08:58:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:08:01.082 08:58:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:01.082 08:58:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3235320 00:08:01.082 08:58:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:01.082 08:58:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:01.082 08:58:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3235320' 00:08:01.082 killing process with pid 3235320 00:08:01.082 08:58:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 3235320 00:08:01.082 Received shutdown signal, test time was about 10.000000 seconds 00:08:01.082 00:08:01.082 Latency(us) 00:08:01.082 [2024-11-20T07:58:38.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:01.082 [2024-11-20T07:58:38.111Z] =================================================================================================================== 00:08:01.082 [2024-11-20T07:58:38.111Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:01.082 08:58:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 3235320 00:08:01.340 08:58:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:01.599 08:58:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:01.857 08:58:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:01.857 08:58:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f5cebb7-b57f-4aa4-ac83-d29307ba456d 00:08:02.116 08:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:08:02.116 08:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:02.116 08:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3232761 00:08:02.116 08:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3232761 00:08:02.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3232761 Killed "${NVMF_APP[@]}" "$@" 00:08:02.116 08:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:02.116 08:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:02.116 08:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:02.116 08:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:02.116 08:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:02.116 08:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3236750 00:08:02.116 08:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:02.116 08:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3236750 00:08:02.116 08:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3236750 ']' 00:08:02.116 08:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.116 08:58:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:02.116 08:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.116 08:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:02.116 08:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:02.116 [2024-11-20 08:58:39.142676] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:08:02.116 [2024-11-20 08:58:39.142787] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.377 [2024-11-20 08:58:39.217594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.377 [2024-11-20 08:58:39.275369] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:02.377 [2024-11-20 08:58:39.275422] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:02.377 [2024-11-20 08:58:39.275451] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:02.377 [2024-11-20 08:58:39.275462] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:02.377 [2024-11-20 08:58:39.275472] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:02.377 [2024-11-20 08:58:39.276047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.377 08:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:02.377 08:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:08:02.377 08:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:02.377 08:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:02.377 08:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:02.635 08:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:02.635 08:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:02.894 [2024-11-20 08:58:39.667557] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:02.894 [2024-11-20 08:58:39.667703] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:02.894 [2024-11-20 08:58:39.667753] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:02.894 08:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:02.894 08:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a0a6d6aa-7157-4525-994d-e9d303e7f2db 00:08:02.894 08:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=a0a6d6aa-7157-4525-994d-e9d303e7f2db 
00:08:02.894 08:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:02.894 08:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:08:02.894 08:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:02.894 08:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:02.894 08:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:03.153 08:58:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a0a6d6aa-7157-4525-994d-e9d303e7f2db -t 2000 00:08:03.412 [ 00:08:03.412 { 00:08:03.412 "name": "a0a6d6aa-7157-4525-994d-e9d303e7f2db", 00:08:03.412 "aliases": [ 00:08:03.412 "lvs/lvol" 00:08:03.412 ], 00:08:03.412 "product_name": "Logical Volume", 00:08:03.412 "block_size": 4096, 00:08:03.412 "num_blocks": 38912, 00:08:03.412 "uuid": "a0a6d6aa-7157-4525-994d-e9d303e7f2db", 00:08:03.412 "assigned_rate_limits": { 00:08:03.412 "rw_ios_per_sec": 0, 00:08:03.412 "rw_mbytes_per_sec": 0, 00:08:03.412 "r_mbytes_per_sec": 0, 00:08:03.412 "w_mbytes_per_sec": 0 00:08:03.412 }, 00:08:03.412 "claimed": false, 00:08:03.412 "zoned": false, 00:08:03.412 "supported_io_types": { 00:08:03.412 "read": true, 00:08:03.412 "write": true, 00:08:03.412 "unmap": true, 00:08:03.412 "flush": false, 00:08:03.412 "reset": true, 00:08:03.412 "nvme_admin": false, 00:08:03.412 "nvme_io": false, 00:08:03.412 "nvme_io_md": false, 00:08:03.412 "write_zeroes": true, 00:08:03.412 "zcopy": false, 00:08:03.412 "get_zone_info": false, 00:08:03.412 "zone_management": false, 00:08:03.412 "zone_append": 
false, 00:08:03.412 "compare": false, 00:08:03.412 "compare_and_write": false, 00:08:03.412 "abort": false, 00:08:03.412 "seek_hole": true, 00:08:03.412 "seek_data": true, 00:08:03.412 "copy": false, 00:08:03.412 "nvme_iov_md": false 00:08:03.412 }, 00:08:03.412 "driver_specific": { 00:08:03.412 "lvol": { 00:08:03.412 "lvol_store_uuid": "5f5cebb7-b57f-4aa4-ac83-d29307ba456d", 00:08:03.412 "base_bdev": "aio_bdev", 00:08:03.412 "thin_provision": false, 00:08:03.412 "num_allocated_clusters": 38, 00:08:03.412 "snapshot": false, 00:08:03.412 "clone": false, 00:08:03.412 "esnap_clone": false 00:08:03.412 } 00:08:03.412 } 00:08:03.412 } 00:08:03.412 ] 00:08:03.412 08:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:08:03.412 08:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f5cebb7-b57f-4aa4-ac83-d29307ba456d 00:08:03.412 08:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:03.671 08:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:03.671 08:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f5cebb7-b57f-4aa4-ac83-d29307ba456d 00:08:03.671 08:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:03.930 08:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:03.930 08:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:08:04.189 [2024-11-20 08:58:41.085217] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:04.189 08:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f5cebb7-b57f-4aa4-ac83-d29307ba456d 00:08:04.189 08:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:04.189 08:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f5cebb7-b57f-4aa4-ac83-d29307ba456d 00:08:04.189 08:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:04.189 08:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.189 08:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:04.189 08:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.189 08:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:04.189 08:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.189 08:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:04.189 08:58:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:04.189 08:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f5cebb7-b57f-4aa4-ac83-d29307ba456d 00:08:04.448 request: 00:08:04.448 { 00:08:04.448 "uuid": "5f5cebb7-b57f-4aa4-ac83-d29307ba456d", 00:08:04.448 "method": "bdev_lvol_get_lvstores", 00:08:04.448 "req_id": 1 00:08:04.448 } 00:08:04.448 Got JSON-RPC error response 00:08:04.448 response: 00:08:04.448 { 00:08:04.448 "code": -19, 00:08:04.448 "message": "No such device" 00:08:04.448 } 00:08:04.448 08:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:04.448 08:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:04.448 08:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:04.449 08:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:04.449 08:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:04.707 aio_bdev 00:08:04.707 08:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a0a6d6aa-7157-4525-994d-e9d303e7f2db 00:08:04.707 08:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=a0a6d6aa-7157-4525-994d-e9d303e7f2db 00:08:04.707 08:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:04.707 08:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:08:04.707 08:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:04.707 08:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:04.707 08:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:04.965 08:58:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a0a6d6aa-7157-4525-994d-e9d303e7f2db -t 2000 00:08:05.224 [ 00:08:05.224 { 00:08:05.224 "name": "a0a6d6aa-7157-4525-994d-e9d303e7f2db", 00:08:05.224 "aliases": [ 00:08:05.224 "lvs/lvol" 00:08:05.224 ], 00:08:05.224 "product_name": "Logical Volume", 00:08:05.224 "block_size": 4096, 00:08:05.224 "num_blocks": 38912, 00:08:05.224 "uuid": "a0a6d6aa-7157-4525-994d-e9d303e7f2db", 00:08:05.224 "assigned_rate_limits": { 00:08:05.224 "rw_ios_per_sec": 0, 00:08:05.224 "rw_mbytes_per_sec": 0, 00:08:05.224 "r_mbytes_per_sec": 0, 00:08:05.224 "w_mbytes_per_sec": 0 00:08:05.224 }, 00:08:05.224 "claimed": false, 00:08:05.224 "zoned": false, 00:08:05.224 "supported_io_types": { 00:08:05.224 "read": true, 00:08:05.224 "write": true, 00:08:05.224 "unmap": true, 00:08:05.224 "flush": false, 00:08:05.224 "reset": true, 00:08:05.224 "nvme_admin": false, 00:08:05.224 "nvme_io": false, 00:08:05.224 "nvme_io_md": false, 00:08:05.224 "write_zeroes": true, 00:08:05.224 "zcopy": false, 00:08:05.224 "get_zone_info": false, 00:08:05.224 "zone_management": false, 00:08:05.224 "zone_append": false, 00:08:05.224 "compare": false, 00:08:05.224 "compare_and_write": false, 
00:08:05.224 "abort": false, 00:08:05.224 "seek_hole": true, 00:08:05.224 "seek_data": true, 00:08:05.224 "copy": false, 00:08:05.224 "nvme_iov_md": false 00:08:05.224 }, 00:08:05.224 "driver_specific": { 00:08:05.224 "lvol": { 00:08:05.224 "lvol_store_uuid": "5f5cebb7-b57f-4aa4-ac83-d29307ba456d", 00:08:05.224 "base_bdev": "aio_bdev", 00:08:05.224 "thin_provision": false, 00:08:05.224 "num_allocated_clusters": 38, 00:08:05.224 "snapshot": false, 00:08:05.224 "clone": false, 00:08:05.224 "esnap_clone": false 00:08:05.224 } 00:08:05.224 } 00:08:05.224 } 00:08:05.224 ] 00:08:05.224 08:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:08:05.224 08:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f5cebb7-b57f-4aa4-ac83-d29307ba456d 00:08:05.224 08:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:05.483 08:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:05.483 08:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f5cebb7-b57f-4aa4-ac83-d29307ba456d 00:08:05.483 08:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:05.741 08:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:05.741 08:58:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a0a6d6aa-7157-4525-994d-e9d303e7f2db 00:08:06.000 08:58:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5f5cebb7-b57f-4aa4-ac83-d29307ba456d 00:08:06.568 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:06.568 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:06.568 00:08:06.568 real 0m19.475s 00:08:06.568 user 0m49.417s 00:08:06.568 sys 0m4.562s 00:08:06.568 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:06.568 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:06.568 ************************************ 00:08:06.568 END TEST lvs_grow_dirty 00:08:06.568 ************************************ 00:08:06.826 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:06.826 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:08:06.826 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:08:06.826 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:08:06.826 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:06.826 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:08:06.826 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:08:06.826 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@822 -- # for n in $shm_files 00:08:06.826 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:06.826 nvmf_trace.0 00:08:06.826 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:08:06.826 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:06.826 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:06.826 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:06.826 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:06.826 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:06.826 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:06.826 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:06.826 rmmod nvme_tcp 00:08:06.826 rmmod nvme_fabrics 00:08:06.826 rmmod nvme_keyring 00:08:06.826 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:06.826 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:06.826 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:06.826 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3236750 ']' 00:08:06.826 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3236750 00:08:06.826 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 3236750 ']' 00:08:06.826 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 3236750 
00:08:06.826 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:08:06.826 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:06.826 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3236750 00:08:06.826 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:06.826 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:06.826 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3236750' 00:08:06.826 killing process with pid 3236750 00:08:06.826 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 3236750 00:08:06.826 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 3236750 00:08:07.087 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:07.087 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:07.087 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:07.087 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:07.087 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:07.087 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:07.087 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:07.087 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:07.087 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:08:07.087 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.087 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:07.087 08:58:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.992 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:08.992 00:08:08.992 real 0m42.959s 00:08:08.992 user 1m12.985s 00:08:08.992 sys 0m8.603s 00:08:08.992 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:08.992 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:08.992 ************************************ 00:08:08.992 END TEST nvmf_lvs_grow 00:08:08.992 ************************************ 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:09.252 ************************************ 00:08:09.252 START TEST nvmf_bdev_io_wait 00:08:09.252 ************************************ 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:09.252 * Looking for test storage... 
00:08:09.252 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:09.252 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.252 --rc genhtml_branch_coverage=1 00:08:09.252 --rc genhtml_function_coverage=1 00:08:09.252 --rc genhtml_legend=1 00:08:09.252 --rc geninfo_all_blocks=1 00:08:09.252 --rc geninfo_unexecuted_blocks=1 00:08:09.252 00:08:09.252 ' 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:09.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.252 --rc genhtml_branch_coverage=1 00:08:09.252 --rc genhtml_function_coverage=1 00:08:09.252 --rc genhtml_legend=1 00:08:09.252 --rc geninfo_all_blocks=1 00:08:09.252 --rc geninfo_unexecuted_blocks=1 00:08:09.252 00:08:09.252 ' 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:09.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.252 --rc genhtml_branch_coverage=1 00:08:09.252 --rc genhtml_function_coverage=1 00:08:09.252 --rc genhtml_legend=1 00:08:09.252 --rc geninfo_all_blocks=1 00:08:09.252 --rc geninfo_unexecuted_blocks=1 00:08:09.252 00:08:09.252 ' 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:09.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.252 --rc genhtml_branch_coverage=1 00:08:09.252 --rc genhtml_function_coverage=1 00:08:09.252 --rc genhtml_legend=1 00:08:09.252 --rc geninfo_all_blocks=1 00:08:09.252 --rc geninfo_unexecuted_blocks=1 00:08:09.252 00:08:09.252 ' 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.252 08:58:46 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.252 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.253 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.253 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.253 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.253 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:09.253 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.253 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:09.253 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:09.253 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:09.253 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:09.253 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:09.253 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.253 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:09.253 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:09.253 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:09.253 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:09.253 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:09.253 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:09.253 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:09.253 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:09.253 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:09.253 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:09.253 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:09.253 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:09.253 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:09.253 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.253 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:09.253 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:08:09.253 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:09.253 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:09.253 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:09.253 08:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:11.791 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:11.791 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:11.791 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:11.791 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:11.791 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:11.791 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:11.791 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:11.791 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:11.791 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:11.791 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:11.791 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:11.791 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:11.791 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:11.791 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:08:11.791 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:11.791 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:11.791 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:11.791 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:11.791 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:11.791 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:11.791 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:11.791 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:11.791 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:11.791 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:11.791 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:11.792 08:58:48 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:11.792 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:11.792 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:11.792 08:58:48 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:11.792 Found net devices under 0000:09:00.0: cvl_0_0 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:11.792 
08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:11.792 Found net devices under 0000:09:00.1: cvl_0_1 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:11.792 08:58:48 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:11.792 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:11.792 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:08:11.792 00:08:11.792 --- 10.0.0.2 ping statistics --- 00:08:11.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.792 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:11.792 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:11.792 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:08:11.792 00:08:11.792 --- 10.0.0.1 ping statistics --- 00:08:11.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.792 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3239413 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3239413 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 3239413 ']' 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:11.792 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:11.793 [2024-11-20 08:58:48.666370] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:08:11.793 [2024-11-20 08:58:48.666444] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:11.793 [2024-11-20 08:58:48.737052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:11.793 [2024-11-20 08:58:48.799095] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:11.793 [2024-11-20 08:58:48.799151] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:11.793 [2024-11-20 08:58:48.799178] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:11.793 [2024-11-20 08:58:48.799190] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:11.793 [2024-11-20 08:58:48.799199] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:11.793 [2024-11-20 08:58:48.800938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.793 [2024-11-20 08:58:48.800996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:11.793 [2024-11-20 08:58:48.801065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:11.793 [2024-11-20 08:58:48.801068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.052 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:12.052 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:08:12.052 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:12.052 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:12.052 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:12.052 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:12.052 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:12.052 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.052 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:12.052 08:58:48 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.052 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:12.052 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.052 08:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:12.052 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.052 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:12.052 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.052 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:12.052 [2024-11-20 08:58:49.051907] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.052 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.052 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:12.052 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.052 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:12.311 Malloc0 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.311 
08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:12.311 [2024-11-20 08:58:49.105822] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3239436 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3239438 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 
00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:12.311 { 00:08:12.311 "params": { 00:08:12.311 "name": "Nvme$subsystem", 00:08:12.311 "trtype": "$TEST_TRANSPORT", 00:08:12.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:12.311 "adrfam": "ipv4", 00:08:12.311 "trsvcid": "$NVMF_PORT", 00:08:12.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:12.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:12.311 "hdgst": ${hdgst:-false}, 00:08:12.311 "ddgst": ${ddgst:-false} 00:08:12.311 }, 00:08:12.311 "method": "bdev_nvme_attach_controller" 00:08:12.311 } 00:08:12.311 EOF 00:08:12.311 )") 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3239440 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3239443 00:08:12.311 08:58:49 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:12.311 { 00:08:12.311 "params": { 00:08:12.311 "name": "Nvme$subsystem", 00:08:12.311 "trtype": "$TEST_TRANSPORT", 00:08:12.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:12.311 "adrfam": "ipv4", 00:08:12.311 "trsvcid": "$NVMF_PORT", 00:08:12.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:12.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:12.311 "hdgst": ${hdgst:-false}, 00:08:12.311 "ddgst": ${ddgst:-false} 00:08:12.311 }, 00:08:12.311 "method": "bdev_nvme_attach_controller" 00:08:12.311 } 00:08:12.311 EOF 00:08:12.311 )") 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:12.311 { 00:08:12.311 "params": { 00:08:12.311 "name": "Nvme$subsystem", 
00:08:12.311 "trtype": "$TEST_TRANSPORT", 00:08:12.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:12.311 "adrfam": "ipv4", 00:08:12.311 "trsvcid": "$NVMF_PORT", 00:08:12.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:12.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:12.311 "hdgst": ${hdgst:-false}, 00:08:12.311 "ddgst": ${ddgst:-false} 00:08:12.311 }, 00:08:12.311 "method": "bdev_nvme_attach_controller" 00:08:12.311 } 00:08:12.311 EOF 00:08:12.311 )") 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:12.311 { 00:08:12.311 "params": { 00:08:12.311 "name": "Nvme$subsystem", 00:08:12.311 "trtype": "$TEST_TRANSPORT", 00:08:12.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:12.311 "adrfam": "ipv4", 00:08:12.311 "trsvcid": "$NVMF_PORT", 00:08:12.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:12.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:12.311 "hdgst": ${hdgst:-false}, 00:08:12.311 "ddgst": ${ddgst:-false} 00:08:12.311 }, 00:08:12.311 "method": "bdev_nvme_attach_controller" 00:08:12.311 } 00:08:12.311 EOF 00:08:12.311 )") 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3239436 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # cat 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:12.311 "params": { 00:08:12.311 "name": "Nvme1", 00:08:12.311 "trtype": "tcp", 00:08:12.311 "traddr": "10.0.0.2", 00:08:12.311 "adrfam": "ipv4", 00:08:12.311 "trsvcid": "4420", 00:08:12.311 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:12.311 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:12.311 "hdgst": false, 00:08:12.311 "ddgst": false 00:08:12.311 }, 00:08:12.311 "method": "bdev_nvme_attach_controller" 00:08:12.311 }' 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
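The nvmf/common.sh trace above (@560-586) shows how gen_nvmf_target_json assembles each bdevperf --json payload: one heredoc fragment per subsystem is captured into a bash array, echoed through `cat`, normalized with `jq .`, and joined with `IFS=,` into the final document fed via /dev/fd/63. A minimal sketch of that pattern (simplified; the variable defaults here mirror the values printed later in this log and are illustration-only):

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern traced above: build one JSON
# fragment per subsystem with a heredoc, collect them in an array, then join
# with commas. The real helper additionally pipes the result through `jq .`
# and bdevperf reads it via --json /dev/fd/63.
config=()
for subsystem in "${@:-1}"; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Join all per-subsystem fragments with commas (matches the IFS=, printf
# at nvmf/common.sh@585-586 in the trace above).
IFS=,
printf '%s\n' "${config[*]}"
```

With no arguments this emits the single Nvme1 attach-controller object, matching the printf output recorded below for each of the four bdevperf instances.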
00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:12.311 "params": { 00:08:12.311 "name": "Nvme1", 00:08:12.311 "trtype": "tcp", 00:08:12.311 "traddr": "10.0.0.2", 00:08:12.311 "adrfam": "ipv4", 00:08:12.311 "trsvcid": "4420", 00:08:12.311 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:12.311 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:12.311 "hdgst": false, 00:08:12.311 "ddgst": false 00:08:12.311 }, 00:08:12.311 "method": "bdev_nvme_attach_controller" 00:08:12.311 }' 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:12.311 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:12.311 "params": { 00:08:12.311 "name": "Nvme1", 00:08:12.311 "trtype": "tcp", 00:08:12.311 "traddr": "10.0.0.2", 00:08:12.311 "adrfam": "ipv4", 00:08:12.311 "trsvcid": "4420", 00:08:12.311 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:12.311 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:12.311 "hdgst": false, 00:08:12.311 "ddgst": false 00:08:12.312 }, 00:08:12.312 "method": "bdev_nvme_attach_controller" 00:08:12.312 }' 00:08:12.312 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:12.312 08:58:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:12.312 "params": { 00:08:12.312 "name": "Nvme1", 00:08:12.312 "trtype": "tcp", 00:08:12.312 "traddr": "10.0.0.2", 00:08:12.312 "adrfam": "ipv4", 00:08:12.312 "trsvcid": "4420", 00:08:12.312 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:12.312 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:12.312 "hdgst": false, 00:08:12.312 "ddgst": false 00:08:12.312 }, 00:08:12.312 "method": "bdev_nvme_attach_controller" 00:08:12.312 }' 00:08:12.312 [2024-11-20 08:58:49.158328] Starting SPDK v25.01-pre git sha1 
7a5718d02 / DPDK 24.03.0 initialization... 00:08:12.312 [2024-11-20 08:58:49.158326] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:08:12.312 [2024-11-20 08:58:49.158326] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:08:12.312 [2024-11-20 08:58:49.158369] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:08:12.312 [2024-11-20 08:58:49.158423] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:12.312 [2024-11-20 08:58:49.158423] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:12.312 [2024-11-20 08:58:49.158423] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:12.312 [2024-11-20 08:58:49.158440] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:12.570 [2024-11-20 08:58:49.345829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.570 [2024-11-20 08:58:49.401930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:12.570 [2024-11-20 08:58:49.451626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.570 [2024-11-20 08:58:49.510203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:12.570 [2024-11-20 
08:58:49.560014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.827 [2024-11-20 08:58:49.618120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:12.827 [2024-11-20 08:58:49.635816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.827 [2024-11-20 08:58:49.689035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:12.827 Running I/O for 1 seconds... 00:08:12.827 Running I/O for 1 seconds... 00:08:13.085 Running I/O for 1 seconds... 00:08:13.085 Running I/O for 1 seconds... 00:08:14.019 10169.00 IOPS, 39.72 MiB/s 00:08:14.019 Latency(us) 00:08:14.019 [2024-11-20T07:58:51.048Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.019 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:14.019 Nvme1n1 : 1.01 10214.66 39.90 0.00 0.00 12476.85 6893.42 21456.97 00:08:14.020 [2024-11-20T07:58:51.049Z] =================================================================================================================== 00:08:14.020 [2024-11-20T07:58:51.049Z] Total : 10214.66 39.90 0.00 0.00 12476.85 6893.42 21456.97 00:08:14.020 8453.00 IOPS, 33.02 MiB/s 00:08:14.020 Latency(us) 00:08:14.020 [2024-11-20T07:58:51.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.020 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:14.020 Nvme1n1 : 1.01 8513.96 33.26 0.00 0.00 14964.74 6747.78 26602.76 00:08:14.020 [2024-11-20T07:58:51.049Z] =================================================================================================================== 00:08:14.020 [2024-11-20T07:58:51.049Z] Total : 8513.96 33.26 0.00 0.00 14964.74 6747.78 26602.76 00:08:14.020 9785.00 IOPS, 38.22 MiB/s 00:08:14.020 Latency(us) 00:08:14.020 [2024-11-20T07:58:51.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.020 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, 
IO size: 4096) 00:08:14.020 Nvme1n1 : 1.01 9856.64 38.50 0.00 0.00 12938.62 4684.61 23787.14 00:08:14.020 [2024-11-20T07:58:51.049Z] =================================================================================================================== 00:08:14.020 [2024-11-20T07:58:51.049Z] Total : 9856.64 38.50 0.00 0.00 12938.62 4684.61 23787.14 00:08:14.020 182224.00 IOPS, 711.81 MiB/s 00:08:14.020 Latency(us) 00:08:14.020 [2024-11-20T07:58:51.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.020 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:14.020 Nvme1n1 : 1.00 181878.97 710.46 0.00 0.00 700.00 292.79 1893.26 00:08:14.020 [2024-11-20T07:58:51.049Z] =================================================================================================================== 00:08:14.020 [2024-11-20T07:58:51.049Z] Total : 181878.97 710.46 0.00 0.00 700.00 292.79 1893.26 00:08:14.020 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3239438 00:08:14.020 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3239440 00:08:14.278 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3239443 00:08:14.278 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:14.278 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.278 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.278 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.278 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:14.278 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:14.278 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:14.278 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:14.278 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:14.278 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:14.278 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:14.278 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:14.278 rmmod nvme_tcp 00:08:14.278 rmmod nvme_fabrics 00:08:14.278 rmmod nvme_keyring 00:08:14.278 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:14.278 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:14.278 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:14.278 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3239413 ']' 00:08:14.278 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3239413 00:08:14.278 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 3239413 ']' 00:08:14.278 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 3239413 00:08:14.278 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:08:14.278 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:14.278 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3239413 00:08:14.278 08:58:51 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:14.278 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:14.278 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3239413' 00:08:14.278 killing process with pid 3239413 00:08:14.278 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 3239413 00:08:14.278 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 3239413 00:08:14.538 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:14.538 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:14.538 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:14.538 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:14.538 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:14.538 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:14.538 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:14.538 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:14.538 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:14.538 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.538 08:58:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:14.538 08:58:51 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.451 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:16.451 00:08:16.451 real 0m7.402s 00:08:16.451 user 0m16.071s 00:08:16.451 sys 0m3.844s 00:08:16.451 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:16.451 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:16.451 ************************************ 00:08:16.451 END TEST nvmf_bdev_io_wait 00:08:16.451 ************************************ 00:08:16.743 08:58:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:16.743 08:58:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:16.743 08:58:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:16.743 08:58:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:16.743 ************************************ 00:08:16.743 START TEST nvmf_queue_depth 00:08:16.743 ************************************ 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:16.744 * Looking for test storage... 
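The teardown traced above ends with `killprocess 3239413`: the helper inspects the process name, refuses to signal a sudo wrapper, kills the nvmf target, and waits for the pid to vanish before nvmftestfini unloads the kernel modules. A hedged sketch of that shutdown pattern (simplified from the autotest_common.sh trace; the reap-then-poll loop is an assumption about the implementation):

```shell
# Sketch of the killprocess pattern traced above (simplified). Refuse to
# signal the sudo wrapper itself, terminate the target process, then block
# until the pid is really gone.
killprocess() {
  local pid=$1 process_name
  [ -n "$pid" ] || return 1
  process_name=$(ps --no-headers -o comm= "$pid")
  [ "$process_name" = sudo ] && return 1   # never kill the sudo wrapper
  echo "killing process with pid $pid"
  kill "$pid" 2>/dev/null || return 0      # already gone
  wait "$pid" 2>/dev/null || true          # reap it if it was our child
  while kill -0 "$pid" 2>/dev/null; do     # otherwise poll until it exits
    sleep 0.1
  done
}
```

Waiting for the pid to disappear matters here: the subsequent `modprobe -r nvme-tcp` would fail if the target still held the transport.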
00:08:16.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:16.744 
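The scripts/common.sh trace that follows (`lt 1.15 2` via cmp_versions, @333-368) is the guard deciding whether the installed lcov is new enough for the branch-coverage flags. Each version string is split on '.', '-' and ':' and compared numerically component by component, missing components defaulting to 0. A hedged sketch (numeric components assumed; the real helper also validates each field with a `^[0-9]+$` regex, as visible at @354):

```shell
# Sketch of the cmp_versions/lt helpers traced below: split both versions on
# the IFS characters .-: and compare left to right; the first unequal
# component decides. Returns success only for a strict match of the operator.
cmp_versions() {
  local ver1 ver2 v op=$2
  IFS=.-: read -ra ver1 <<<"$1"
  IFS=.-: read -ra ver2 <<<"$3"
  local len=${#ver1[@]}
  ((${#ver2[@]} > len)) && len=${#ver2[@]}
  for ((v = 0; v < len; v++)); do
    ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
    ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
  done
  return 1   # versions equal: neither strictly less nor greater
}
lt() { cmp_versions "$1" '<' "$2"; }
```

So `lt 1.15 2` succeeds (1 < 2 on the first component) and the lcov options with branch coverage get exported, exactly as the trace below records.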
08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:16.744 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:16.744 --rc genhtml_branch_coverage=1 00:08:16.744 --rc genhtml_function_coverage=1 00:08:16.744 --rc genhtml_legend=1 00:08:16.744 --rc geninfo_all_blocks=1 00:08:16.744 --rc geninfo_unexecuted_blocks=1 00:08:16.744 00:08:16.744 ' 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:16.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.744 --rc genhtml_branch_coverage=1 00:08:16.744 --rc genhtml_function_coverage=1 00:08:16.744 --rc genhtml_legend=1 00:08:16.744 --rc geninfo_all_blocks=1 00:08:16.744 --rc geninfo_unexecuted_blocks=1 00:08:16.744 00:08:16.744 ' 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:16.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.744 --rc genhtml_branch_coverage=1 00:08:16.744 --rc genhtml_function_coverage=1 00:08:16.744 --rc genhtml_legend=1 00:08:16.744 --rc geninfo_all_blocks=1 00:08:16.744 --rc geninfo_unexecuted_blocks=1 00:08:16.744 00:08:16.744 ' 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:16.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.744 --rc genhtml_branch_coverage=1 00:08:16.744 --rc genhtml_function_coverage=1 00:08:16.744 --rc genhtml_legend=1 00:08:16.744 --rc geninfo_all_blocks=1 00:08:16.744 --rc geninfo_unexecuted_blocks=1 00:08:16.744 00:08:16.744 ' 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:16.744 08:58:53 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:16.744 08:58:53 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:16.744 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:16.744 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:16.745 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:16.745 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:16.745 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:16.745 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:16.745 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:16.745 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:16.745 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:16.745 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:16.745 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:16.745 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:16.745 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:16.745 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:16.745 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.745 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:16.745 08:58:53 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.745 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:16.745 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:16.745 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:16.745 08:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:19.300 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:19.300 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:19.300 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:19.300 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:19.300 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:19.300 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:19.300 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:19.300 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:19.300 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:19.300 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:19.300 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:19.300 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:19.300 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:19.300 08:58:55 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:19.300 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:19.300 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:19.300 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:19.300 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:19.300 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:19.300 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:19.300 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:19.300 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:19.300 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:19.300 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:19.300 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:19.300 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:19.300 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:19.300 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:19.300 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:19.300 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:19.300 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:19.300 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:19.300 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:19.300 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:19.300 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:19.300 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:19.301 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:19.301 Found net devices under 0000:09:00.0: cvl_0_0 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:19.301 Found net devices under 0000:09:00.1: cvl_0_1 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:19.301 
08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:19.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:19.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:08:19.301 00:08:19.301 --- 10.0.0.2 ping statistics --- 00:08:19.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.301 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:08:19.301 08:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:19.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:19.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:08:19.301 00:08:19.301 --- 10.0.0.1 ping statistics --- 00:08:19.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.301 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:08:19.301 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.301 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:19.301 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:19.301 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.301 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:19.302 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:19.302 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.302 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:19.302 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:19.302 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:19.302 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:19.302 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:19.302 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:19.302 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3241681 00:08:19.302 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:19.302 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3241681 00:08:19.302 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3241681 ']' 00:08:19.302 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.302 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:19.302 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.302 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:19.302 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:19.302 [2024-11-20 08:58:56.081987] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:08:19.302 [2024-11-20 08:58:56.082083] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.302 [2024-11-20 08:58:56.157653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.302 [2024-11-20 08:58:56.211015] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.302 [2024-11-20 08:58:56.211075] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:19.302 [2024-11-20 08:58:56.211089] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.302 [2024-11-20 08:58:56.211100] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.302 [2024-11-20 08:58:56.211124] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:19.302 [2024-11-20 08:58:56.211783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.302 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:19.302 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:19.560 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:19.560 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:19.560 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:19.560 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:19.560 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:19.560 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.560 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:19.560 [2024-11-20 08:58:56.357230] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:19.560 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.560 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:08:19.560 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.560 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:19.560 Malloc0 00:08:19.560 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.560 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:19.560 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.560 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:19.560 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.561 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:19.561 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.561 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:19.561 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.561 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:19.561 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.561 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:19.561 [2024-11-20 08:58:56.406432] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:19.561 08:58:56 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.561 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3241816 00:08:19.561 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:19.561 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:19.561 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3241816 /var/tmp/bdevperf.sock 00:08:19.561 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3241816 ']' 00:08:19.561 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:19.561 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:19.561 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:19.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:19.561 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:19.561 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:19.561 [2024-11-20 08:58:56.452430] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:08:19.561 [2024-11-20 08:58:56.452507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3241816 ] 00:08:19.561 [2024-11-20 08:58:56.518931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.561 [2024-11-20 08:58:56.578889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.820 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:19.820 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:19.820 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:19.820 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.820 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:20.079 NVMe0n1 00:08:20.079 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.079 08:58:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:20.079 Running I/O for 10 seconds... 
00:08:22.386 8196.00 IOPS, 32.02 MiB/s [2024-11-20T07:59:00.347Z] 8677.50 IOPS, 33.90 MiB/s [2024-11-20T07:59:01.282Z] 8615.00 IOPS, 33.65 MiB/s [2024-11-20T07:59:02.215Z] 8688.25 IOPS, 33.94 MiB/s [2024-11-20T07:59:03.148Z] 8705.40 IOPS, 34.01 MiB/s [2024-11-20T07:59:04.082Z] 8727.67 IOPS, 34.09 MiB/s [2024-11-20T07:59:05.455Z] 8771.00 IOPS, 34.26 MiB/s [2024-11-20T07:59:06.390Z] 8816.50 IOPS, 34.44 MiB/s [2024-11-20T07:59:07.325Z] 8832.11 IOPS, 34.50 MiB/s [2024-11-20T07:59:07.325Z] 8818.60 IOPS, 34.45 MiB/s 00:08:30.296 Latency(us) 00:08:30.296 [2024-11-20T07:59:07.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.296 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:30.296 Verification LBA range: start 0x0 length 0x4000 00:08:30.296 NVMe0n1 : 10.07 8857.58 34.60 0.00 0.00 115104.48 15437.37 73400.32 00:08:30.296 [2024-11-20T07:59:07.325Z] =================================================================================================================== 00:08:30.296 [2024-11-20T07:59:07.325Z] Total : 8857.58 34.60 0.00 0.00 115104.48 15437.37 73400.32 00:08:30.296 { 00:08:30.296 "results": [ 00:08:30.296 { 00:08:30.296 "job": "NVMe0n1", 00:08:30.296 "core_mask": "0x1", 00:08:30.296 "workload": "verify", 00:08:30.296 "status": "finished", 00:08:30.296 "verify_range": { 00:08:30.296 "start": 0, 00:08:30.296 "length": 16384 00:08:30.296 }, 00:08:30.296 "queue_depth": 1024, 00:08:30.296 "io_size": 4096, 00:08:30.296 "runtime": 10.071605, 00:08:30.296 "iops": 8857.575331836386, 00:08:30.296 "mibps": 34.59990363998588, 00:08:30.296 "io_failed": 0, 00:08:30.296 "io_timeout": 0, 00:08:30.296 "avg_latency_us": 115104.48133849801, 00:08:30.296 "min_latency_us": 15437.368888888888, 00:08:30.296 "max_latency_us": 73400.32 00:08:30.296 } 00:08:30.296 ], 00:08:30.296 "core_count": 1 00:08:30.296 } 00:08:30.296 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 
3241816 00:08:30.296 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3241816 ']' 00:08:30.296 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3241816 00:08:30.296 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:30.296 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:30.296 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3241816 00:08:30.296 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:30.296 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:30.296 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3241816' 00:08:30.296 killing process with pid 3241816 00:08:30.296 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3241816 00:08:30.296 Received shutdown signal, test time was about 10.000000 seconds 00:08:30.296 00:08:30.296 Latency(us) 00:08:30.296 [2024-11-20T07:59:07.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.296 [2024-11-20T07:59:07.325Z] =================================================================================================================== 00:08:30.296 [2024-11-20T07:59:07.325Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:30.296 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3241816 00:08:30.576 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:30.576 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 
00:08:30.576 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:30.576 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:30.576 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:30.576 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:30.576 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:30.576 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:30.576 rmmod nvme_tcp 00:08:30.576 rmmod nvme_fabrics 00:08:30.576 rmmod nvme_keyring 00:08:30.576 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:30.576 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:30.576 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:30.576 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3241681 ']' 00:08:30.576 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3241681 00:08:30.576 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3241681 ']' 00:08:30.576 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3241681 00:08:30.576 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:30.576 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:30.576 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3241681 00:08:30.576 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # 
process_name=reactor_1 00:08:30.576 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:30.576 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3241681' 00:08:30.576 killing process with pid 3241681 00:08:30.576 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3241681 00:08:30.576 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3241681 00:08:30.839 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:30.839 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:30.839 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:30.839 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:30.839 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:30.839 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:30.839 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:30.839 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:30.839 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:30.839 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.839 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.839 08:59:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.378 08:59:09 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:33.378 00:08:33.378 real 0m16.270s 00:08:33.378 user 0m22.817s 00:08:33.378 sys 0m3.140s 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.378 ************************************ 00:08:33.378 END TEST nvmf_queue_depth 00:08:33.378 ************************************ 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:33.378 ************************************ 00:08:33.378 START TEST nvmf_target_multipath 00:08:33.378 ************************************ 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:33.378 * Looking for test storage... 
00:08:33.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:33.378 08:59:09 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:33.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.378 --rc genhtml_branch_coverage=1 00:08:33.378 --rc genhtml_function_coverage=1 00:08:33.378 --rc genhtml_legend=1 00:08:33.378 --rc geninfo_all_blocks=1 00:08:33.378 --rc geninfo_unexecuted_blocks=1 00:08:33.378 00:08:33.378 ' 00:08:33.378 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:33.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.378 --rc genhtml_branch_coverage=1 00:08:33.378 --rc genhtml_function_coverage=1 00:08:33.378 --rc genhtml_legend=1 00:08:33.378 --rc geninfo_all_blocks=1 00:08:33.378 --rc geninfo_unexecuted_blocks=1 00:08:33.378 00:08:33.378 ' 00:08:33.379 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:33.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.379 --rc genhtml_branch_coverage=1 00:08:33.379 --rc genhtml_function_coverage=1 00:08:33.379 --rc genhtml_legend=1 00:08:33.379 --rc geninfo_all_blocks=1 00:08:33.379 --rc geninfo_unexecuted_blocks=1 00:08:33.379 00:08:33.379 ' 00:08:33.379 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:33.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.379 --rc genhtml_branch_coverage=1 00:08:33.379 --rc genhtml_function_coverage=1 00:08:33.379 --rc genhtml_legend=1 00:08:33.379 --rc geninfo_all_blocks=1 00:08:33.379 --rc geninfo_unexecuted_blocks=1 00:08:33.379 00:08:33.379 ' 00:08:33.379 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:33.379 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:08:33.379 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.379 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.379 08:59:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:33.379 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:33.379 08:59:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:08:35.301 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:35.301 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:35.301 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:35.301 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:35.301 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:35.301 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:35.301 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:35.301 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:08:35.301 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:35.301 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:35.301 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:35.301 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:35.301 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:35.301 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:35.301 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:35.301 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:35.301 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:35.301 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:35.301 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:35.301 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:35.301 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:35.301 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:35.301 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:35.301 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:35.301 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:35.301 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:35.301 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:35.301 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:35.301 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:35.301 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:35.301 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:35.301 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:35.301 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:35.301 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:35.301 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:35.302 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:35.302 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:35.302 Found net devices under 0000:09:00.0: cvl_0_0 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:35.302 08:59:12 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:35.302 Found net devices under 0000:09:00.1: cvl_0_1 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:35.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:35.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:08:35.302 00:08:35.302 --- 10.0.0.2 ping statistics --- 00:08:35.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.302 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:35.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:35.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:08:35.302 00:08:35.302 --- 10.0.0.1 ping statistics --- 00:08:35.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.302 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:35.302 only one NIC for nvmf test 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:35.302 08:59:12 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:35.302 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:35.302 rmmod nvme_tcp 00:08:35.302 rmmod nvme_fabrics 00:08:35.302 rmmod nvme_keyring 00:08:35.560 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:35.560 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:35.560 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:35.560 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:35.560 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:35.561 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:35.561 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:35.561 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:35.561 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:35.561 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:35.561 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:35.561 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:35.561 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:08:35.561 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.561 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.561 08:59:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.466 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:37.466 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:37.466 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:37.466 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:37.466 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:37.466 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:37.466 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:37.466 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:37.466 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:37.466 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:37.466 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:37.466 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:37.466 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:37.466 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:08:37.466 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:37.466 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:37.466 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:37.466 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:37.466 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:37.466 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:37.466 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:37.466 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:37.466 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.466 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:37.466 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.466 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:37.466 00:08:37.466 real 0m4.570s 00:08:37.466 user 0m0.941s 00:08:37.466 sys 0m1.642s 00:08:37.466 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:37.466 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:37.466 ************************************ 00:08:37.466 END TEST nvmf_target_multipath 00:08:37.466 ************************************ 00:08:37.466 08:59:14 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:37.466 08:59:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:37.466 08:59:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:37.466 08:59:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:37.466 ************************************ 00:08:37.466 START TEST nvmf_zcopy 00:08:37.466 ************************************ 00:08:37.466 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:37.725 * Looking for test storage... 00:08:37.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<'
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:08:37.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:37.725 --rc genhtml_branch_coverage=1
00:08:37.725 --rc genhtml_function_coverage=1
00:08:37.725 --rc genhtml_legend=1
00:08:37.725 --rc geninfo_all_blocks=1
00:08:37.725 --rc geninfo_unexecuted_blocks=1
00:08:37.725
00:08:37.725 '
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:08:37.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:37.725 --rc genhtml_branch_coverage=1
00:08:37.725 --rc genhtml_function_coverage=1
00:08:37.725 --rc genhtml_legend=1
00:08:37.725 --rc geninfo_all_blocks=1
00:08:37.725 --rc geninfo_unexecuted_blocks=1
00:08:37.725
00:08:37.725 '
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:08:37.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:37.725 --rc genhtml_branch_coverage=1
00:08:37.725 --rc genhtml_function_coverage=1
00:08:37.725 --rc genhtml_legend=1
00:08:37.725 --rc geninfo_all_blocks=1
00:08:37.725 --rc geninfo_unexecuted_blocks=1
00:08:37.725
00:08:37.725 '
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:08:37.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:37.725 --rc genhtml_branch_coverage=1
00:08:37.725 --rc genhtml_function_coverage=1
00:08:37.725 --rc genhtml_legend=1
00:08:37.725 --rc geninfo_all_blocks=1
00:08:37.725 --rc geninfo_unexecuted_blocks=1
00:08:37.725
00:08:37.725 '
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:37.725 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:37.726 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:37.726 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:37.726 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH
00:08:37.726 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:37.726 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0
00:08:37.726 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:08:37.726 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:08:37.726 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:08:37.726 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:37.726 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:37.726 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:08:37.726 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:08:37.726 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:08:37.726 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0
00:08:37.726 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit
00:08:37.726 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:08:37.726 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:08:37.726 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs
00:08:37.726 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no
00:08:37.726 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns
00:08:37.726 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:37.726 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:37.726 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:37.726 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:08:37.726 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:08:37.726 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable
00:08:37.726 08:59:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=()
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=()
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=()
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=()
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=()
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=()
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=()
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)'
00:08:40.258 Found 0000:09:00.0 (0x8086 - 0x159b)
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)'
00:08:40.258 Found 0000:09:00.1 (0x8086 - 0x159b)
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]]
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:40.258 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0'
00:08:40.258 Found net devices under 0000:09:00.0: cvl_0_0
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]]
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1'
00:08:40.259 Found net devices under 0000:09:00.1: cvl_0_1
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:08:40.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:40.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms
00:08:40.259
00:08:40.259 --- 10.0.0.2 ping statistics ---
00:08:40.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:40.259 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:40.259 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:40.259 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms
00:08:40.259
00:08:40.259 --- 10.0.0.1 ping statistics ---
00:08:40.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:40.259 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3247032
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3247032
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 3247032 ']'
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable
00:08:40.259 08:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
[2024-11-20 08:59:17.032082] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization...
[2024-11-20 08:59:17.032176] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-11-20 08:59:17.102015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-20 08:59:17.154809] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-11-20 08:59:17.154866] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:40.259 [2024-11-20 08:59:17.154894] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:40.259 [2024-11-20 08:59:17.154905] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:40.259 [2024-11-20 08:59:17.154915] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:40.259 [2024-11-20 08:59:17.155501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 ))
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:40.517 08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:40.517 [2024-11-20 08:59:17.299820] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:40.518 [2024-11-20 08:59:17.315974] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:40.518 malloc0
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:08:40.518 {
00:08:40.518 "params": {
00:08:40.518 "name": "Nvme$subsystem",
00:08:40.518 "trtype": "$TEST_TRANSPORT",
00:08:40.518 "traddr": "$NVMF_FIRST_TARGET_IP",
00:08:40.518 "adrfam": "ipv4",
00:08:40.518 "trsvcid": "$NVMF_PORT",
00:08:40.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:08:40.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:08:40.518 "hdgst": ${hdgst:-false},
00:08:40.518 "ddgst": ${ddgst:-false}
00:08:40.518 },
00:08:40.518 "method": "bdev_nvme_attach_controller"
00:08:40.518 }
00:08:40.518 EOF
00:08:40.518 )")
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:08:40.518 08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:40.518 08:59:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:40.518 "params": { 00:08:40.518 "name": "Nvme1", 00:08:40.518 "trtype": "tcp", 00:08:40.518 "traddr": "10.0.0.2", 00:08:40.518 "adrfam": "ipv4", 00:08:40.518 "trsvcid": "4420", 00:08:40.518 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:40.518 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:40.518 "hdgst": false, 00:08:40.518 "ddgst": false 00:08:40.518 }, 00:08:40.518 "method": "bdev_nvme_attach_controller" 00:08:40.518 }' 00:08:40.518 [2024-11-20 08:59:17.401228] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:08:40.518 [2024-11-20 08:59:17.401336] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3247061 ] 00:08:40.518 [2024-11-20 08:59:17.468237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.518 [2024-11-20 08:59:17.530279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.083 Running I/O for 10 seconds... 
00:08:42.948 5749.00 IOPS, 44.91 MiB/s [2024-11-20T07:59:20.910Z] 5810.00 IOPS, 45.39 MiB/s [2024-11-20T07:59:22.285Z] 5829.00 IOPS, 45.54 MiB/s [2024-11-20T07:59:23.217Z] 5842.75 IOPS, 45.65 MiB/s [2024-11-20T07:59:24.150Z] 5855.00 IOPS, 45.74 MiB/s [2024-11-20T07:59:25.082Z] 5858.67 IOPS, 45.77 MiB/s [2024-11-20T07:59:26.015Z] 5866.86 IOPS, 45.83 MiB/s [2024-11-20T07:59:26.948Z] 5866.00 IOPS, 45.83 MiB/s [2024-11-20T07:59:28.319Z] 5866.33 IOPS, 45.83 MiB/s [2024-11-20T07:59:28.319Z] 5872.10 IOPS, 45.88 MiB/s 00:08:51.290 Latency(us) 00:08:51.290 [2024-11-20T07:59:28.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:51.290 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:51.290 Verification LBA range: start 0x0 length 0x1000 00:08:51.290 Nvme1n1 : 10.06 5851.10 45.71 0.00 0.00 21730.62 4004.98 45632.47 00:08:51.290 [2024-11-20T07:59:28.319Z] =================================================================================================================== 00:08:51.290 [2024-11-20T07:59:28.319Z] Total : 5851.10 45.71 0.00 0.00 21730.62 4004.98 45632.47 00:08:51.290 08:59:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3248300 00:08:51.290 08:59:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:51.290 08:59:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:51.290 08:59:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:51.290 08:59:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:51.290 08:59:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:51.290 08:59:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:51.290 08:59:28 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:51.290 08:59:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:51.290 { 00:08:51.290 "params": { 00:08:51.290 "name": "Nvme$subsystem", 00:08:51.290 "trtype": "$TEST_TRANSPORT", 00:08:51.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:51.290 "adrfam": "ipv4", 00:08:51.290 "trsvcid": "$NVMF_PORT", 00:08:51.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:51.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:51.290 "hdgst": ${hdgst:-false}, 00:08:51.290 "ddgst": ${ddgst:-false} 00:08:51.290 }, 00:08:51.290 "method": "bdev_nvme_attach_controller" 00:08:51.290 } 00:08:51.290 EOF 00:08:51.290 )") 00:08:51.290 08:59:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:51.290 [2024-11-20 08:59:28.189752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.290 [2024-11-20 08:59:28.189793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.290 08:59:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:51.290 08:59:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:51.290 08:59:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:51.290 "params": { 00:08:51.290 "name": "Nvme1", 00:08:51.290 "trtype": "tcp", 00:08:51.290 "traddr": "10.0.0.2", 00:08:51.290 "adrfam": "ipv4", 00:08:51.290 "trsvcid": "4420", 00:08:51.290 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:51.290 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:51.290 "hdgst": false, 00:08:51.290 "ddgst": false 00:08:51.290 }, 00:08:51.290 "method": "bdev_nvme_attach_controller" 00:08:51.290 }' 00:08:51.290 [2024-11-20 08:59:28.197689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.290 [2024-11-20 08:59:28.197711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.290 [2024-11-20 08:59:28.205738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.290 [2024-11-20 08:59:28.205759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.290 [2024-11-20 08:59:28.213739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.290 [2024-11-20 08:59:28.213760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.290 [2024-11-20 08:59:28.221761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.290 [2024-11-20 08:59:28.221782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.290 [2024-11-20 08:59:28.228662] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:08:51.290 [2024-11-20 08:59:28.228732] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3248300 ] 00:08:51.291 [2024-11-20 08:59:28.229780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.291 [2024-11-20 08:59:28.229800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.291 [2024-11-20 08:59:28.237800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.291 [2024-11-20 08:59:28.237820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.291 [2024-11-20 08:59:28.245823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.291 [2024-11-20 08:59:28.245853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.291 [2024-11-20 08:59:28.253845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.291 [2024-11-20 08:59:28.253866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.291 [2024-11-20 08:59:28.261867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.291 [2024-11-20 08:59:28.261886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.291 [2024-11-20 08:59:28.269890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.291 [2024-11-20 08:59:28.269910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.291 [2024-11-20 08:59:28.277912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.291 [2024-11-20 08:59:28.277931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:08:51.291 [2024-11-20 08:59:28.285935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.291 [2024-11-20 08:59:28.285954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.291 [2024-11-20 08:59:28.293957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.291 [2024-11-20 08:59:28.293976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.291 [2024-11-20 08:59:28.296965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.291 [2024-11-20 08:59:28.301984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.291 [2024-11-20 08:59:28.302005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.291 [2024-11-20 08:59:28.310038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.291 [2024-11-20 08:59:28.310081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.549 [2024-11-20 08:59:28.318043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.549 [2024-11-20 08:59:28.318067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.549 [2024-11-20 08:59:28.326050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.549 [2024-11-20 08:59:28.326070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.549 [2024-11-20 08:59:28.334069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.549 [2024-11-20 08:59:28.334089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.549 [2024-11-20 08:59:28.342092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.549 [2024-11-20 08:59:28.342112] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.549 [2024-11-20 08:59:28.350112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.549 [2024-11-20 08:59:28.350131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.549 [2024-11-20 08:59:28.358136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.549 [2024-11-20 08:59:28.358155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.549 [2024-11-20 08:59:28.359839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.549 [2024-11-20 08:59:28.366159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.549 [2024-11-20 08:59:28.366178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.549 [2024-11-20 08:59:28.374189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.549 [2024-11-20 08:59:28.374211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.549 [2024-11-20 08:59:28.382233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.549 [2024-11-20 08:59:28.382265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.549 [2024-11-20 08:59:28.390258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.549 [2024-11-20 08:59:28.390312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.549 [2024-11-20 08:59:28.398280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.549 [2024-11-20 08:59:28.398336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.549 [2024-11-20 08:59:28.406327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:08:51.549 [2024-11-20 08:59:28.406361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.549 [2024-11-20 08:59:28.414363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.549 [2024-11-20 08:59:28.414399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.549 [2024-11-20 08:59:28.422376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.549 [2024-11-20 08:59:28.422411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.549 [2024-11-20 08:59:28.430358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.549 [2024-11-20 08:59:28.430380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.549 [2024-11-20 08:59:28.438408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.549 [2024-11-20 08:59:28.438443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.549 [2024-11-20 08:59:28.446427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.549 [2024-11-20 08:59:28.446462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.549 [2024-11-20 08:59:28.454451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.549 [2024-11-20 08:59:28.454488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.549 [2024-11-20 08:59:28.462437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.549 [2024-11-20 08:59:28.462459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.549 [2024-11-20 08:59:28.470458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.549 [2024-11-20 
08:59:28.470480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.549 [2024-11-20 08:59:28.478481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.549 [2024-11-20 08:59:28.478502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.549 [2024-11-20 08:59:28.486514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.549 [2024-11-20 08:59:28.486541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.549 [2024-11-20 08:59:28.494550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.549 [2024-11-20 08:59:28.494574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.550 [2024-11-20 08:59:28.502555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.550 [2024-11-20 08:59:28.502592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.550 [2024-11-20 08:59:28.510592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.550 [2024-11-20 08:59:28.510615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.550 [2024-11-20 08:59:28.518620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.550 [2024-11-20 08:59:28.518657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.550 [2024-11-20 08:59:28.526636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.550 [2024-11-20 08:59:28.526672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.550 [2024-11-20 08:59:28.534657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.550 [2024-11-20 08:59:28.534679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:08:51.550 [2024-11-20 08:59:28.542664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.550 [2024-11-20 08:59:28.542685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.550 [2024-11-20 08:59:28.550685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.550 [2024-11-20 08:59:28.550705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.550 [2024-11-20 08:59:28.558721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.550 [2024-11-20 08:59:28.558741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.550 [2024-11-20 08:59:28.566747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.550 [2024-11-20 08:59:28.566770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.550 [2024-11-20 08:59:28.574756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.550 [2024-11-20 08:59:28.574778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.808 [2024-11-20 08:59:28.582779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.808 [2024-11-20 08:59:28.582814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.808 [2024-11-20 08:59:28.590801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.808 [2024-11-20 08:59:28.590820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.808 [2024-11-20 08:59:28.598822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.808 [2024-11-20 08:59:28.598842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.808 
[2024-11-20 08:59:28.606848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.808 [2024-11-20 08:59:28.606869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.808 [2024-11-20 08:59:28.614870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.808 [2024-11-20 08:59:28.614892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.808 [2024-11-20 08:59:28.622896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.808 [2024-11-20 08:59:28.622916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.808 [2024-11-20 08:59:28.630916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.808 [2024-11-20 08:59:28.630950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.808 [2024-11-20 08:59:28.638937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.808 [2024-11-20 08:59:28.638956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.808 [2024-11-20 08:59:28.646960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.808 [2024-11-20 08:59:28.646980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.808 [2024-11-20 08:59:28.654983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.808 [2024-11-20 08:59:28.655003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.808 [2024-11-20 08:59:28.663004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.808 [2024-11-20 08:59:28.663024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.808 [2024-11-20 08:59:28.671031] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.808 [2024-11-20 08:59:28.671054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.808 Running I/O for 5 seconds... 00:08:51.808 [2024-11-20 08:59:28.679051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.808 [2024-11-20 08:59:28.679072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.808 [2024-11-20 08:59:28.692760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.808 [2024-11-20 08:59:28.692804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.808 [2024-11-20 08:59:28.703201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.808 [2024-11-20 08:59:28.703230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.808 [2024-11-20 08:59:28.713961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.808 [2024-11-20 08:59:28.713990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.808 [2024-11-20 08:59:28.724763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.808 [2024-11-20 08:59:28.724792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.808 [2024-11-20 08:59:28.735807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.808 [2024-11-20 08:59:28.735835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.808 [2024-11-20 08:59:28.749251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.808 [2024-11-20 08:59:28.749279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.808 [2024-11-20 08:59:28.759802] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.808 [2024-11-20 08:59:28.759830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.808 [2024-11-20 08:59:28.770411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.808 [2024-11-20 08:59:28.770440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.808 [2024-11-20 08:59:28.781195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.808 [2024-11-20 08:59:28.781223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.808 [2024-11-20 08:59:28.791923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.808 [2024-11-20 08:59:28.791962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.808 [2024-11-20 08:59:28.802605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.808 [2024-11-20 08:59:28.802633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.808 [2024-11-20 08:59:28.813129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.808 [2024-11-20 08:59:28.813157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.808 [2024-11-20 08:59:28.823621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.808 [2024-11-20 08:59:28.823649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.808 [2024-11-20 08:59:28.834240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.808 [2024-11-20 08:59:28.834268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.067 [2024-11-20 08:59:28.844945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:52.067 [2024-11-20 08:59:28.844990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.067 [2024-11-20 08:59:28.855536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.067 [2024-11-20 08:59:28.855565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.067 [2024-11-20 08:59:28.866106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.067 [2024-11-20 08:59:28.866148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.067 [2024-11-20 08:59:28.877025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.067 [2024-11-20 08:59:28.877068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.067 [2024-11-20 08:59:28.889426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.067 [2024-11-20 08:59:28.889458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.068 [2024-11-20 08:59:28.899750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.068 [2024-11-20 08:59:28.899778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.068 [2024-11-20 08:59:28.910510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.068 [2024-11-20 08:59:28.910538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.068 [2024-11-20 08:59:28.921419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.068 [2024-11-20 08:59:28.921446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.068 [2024-11-20 08:59:28.932003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.068 
[2024-11-20 08:59:28.932031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.068 [2024-11-20 08:59:28.942669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.068 [2024-11-20 08:59:28.942697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.068 [2024-11-20 08:59:28.952854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.068 [2024-11-20 08:59:28.952882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.068 [2024-11-20 08:59:28.963118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.068 [2024-11-20 08:59:28.963146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.068 [2024-11-20 08:59:28.973933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.068 [2024-11-20 08:59:28.973961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.068 [2024-11-20 08:59:28.986569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.068 [2024-11-20 08:59:28.986597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.068 [2024-11-20 08:59:28.997049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.068 [2024-11-20 08:59:28.997099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.068 [2024-11-20 08:59:29.007535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.068 [2024-11-20 08:59:29.007564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.068 [2024-11-20 08:59:29.018111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.068 [2024-11-20 08:59:29.018139] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.068 [2024-11-20 08:59:29.028872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.068 [2024-11-20 08:59:29.028901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.068 [2024-11-20 08:59:29.039703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.068 [2024-11-20 08:59:29.039731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.068 [2024-11-20 08:59:29.050338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.068 [2024-11-20 08:59:29.050365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.068 [2024-11-20 08:59:29.060692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.068 [2024-11-20 08:59:29.060720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.068 [2024-11-20 08:59:29.071215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.068 [2024-11-20 08:59:29.071244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.068 [2024-11-20 08:59:29.081872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.068 [2024-11-20 08:59:29.081901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.368 [2024-11-20 08:59:29.093028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.368 [2024-11-20 08:59:29.093056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.368 [2024-11-20 08:59:29.103670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.368 [2024-11-20 08:59:29.103699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:52.368 [2024-11-20 08:59:29.114278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.368 [2024-11-20 08:59:29.114315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same *ERROR* pair ("Requested NSID 1 already in use" / "Unable to add namespace") repeats with advancing timestamps from 08:59:29.127 through 08:59:30.896; interleaved throughput samples: 11841.00 IOPS, 92.51 MiB/s [2024-11-20T07:59:29.740Z] and 11861.50 IOPS, 92.67 MiB/s [2024-11-20T07:59:30.773Z]]
00:08:54.002 [2024-11-20 08:59:30.906890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
[2024-11-20 08:59:30.906918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.002 [2024-11-20 08:59:30.917731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.002 [2024-11-20 08:59:30.917759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.002 [2024-11-20 08:59:30.930517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.002 [2024-11-20 08:59:30.930546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.002 [2024-11-20 08:59:30.940322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.002 [2024-11-20 08:59:30.940351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.002 [2024-11-20 08:59:30.951484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.002 [2024-11-20 08:59:30.951511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.002 [2024-11-20 08:59:30.964163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.002 [2024-11-20 08:59:30.964192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.002 [2024-11-20 08:59:30.974605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.002 [2024-11-20 08:59:30.974633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.002 [2024-11-20 08:59:30.985451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.002 [2024-11-20 08:59:30.985479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.002 [2024-11-20 08:59:30.996091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.002 [2024-11-20 08:59:30.996119] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.002 [2024-11-20 08:59:31.006967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.002 [2024-11-20 08:59:31.006994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.002 [2024-11-20 08:59:31.019564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.002 [2024-11-20 08:59:31.019592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.260 [2024-11-20 08:59:31.029844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.260 [2024-11-20 08:59:31.029887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.260 [2024-11-20 08:59:31.040741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.260 [2024-11-20 08:59:31.040769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.260 [2024-11-20 08:59:31.051721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.260 [2024-11-20 08:59:31.051749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.260 [2024-11-20 08:59:31.062603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.260 [2024-11-20 08:59:31.062631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.260 [2024-11-20 08:59:31.075595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.260 [2024-11-20 08:59:31.075623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.260 [2024-11-20 08:59:31.087589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.260 [2024-11-20 08:59:31.087617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:54.260 [2024-11-20 08:59:31.096507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.260 [2024-11-20 08:59:31.096535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.260 [2024-11-20 08:59:31.108546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.260 [2024-11-20 08:59:31.108575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.260 [2024-11-20 08:59:31.119166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.260 [2024-11-20 08:59:31.119195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.260 [2024-11-20 08:59:31.130255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.260 [2024-11-20 08:59:31.130283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.260 [2024-11-20 08:59:31.141084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.260 [2024-11-20 08:59:31.141112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.260 [2024-11-20 08:59:31.152372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.260 [2024-11-20 08:59:31.152400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.260 [2024-11-20 08:59:31.165186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.260 [2024-11-20 08:59:31.165214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.260 [2024-11-20 08:59:31.175539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.260 [2024-11-20 08:59:31.175567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.260 [2024-11-20 08:59:31.186193] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.260 [2024-11-20 08:59:31.186221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.260 [2024-11-20 08:59:31.196608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.260 [2024-11-20 08:59:31.196639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.261 [2024-11-20 08:59:31.207702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.261 [2024-11-20 08:59:31.207733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.261 [2024-11-20 08:59:31.218558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.261 [2024-11-20 08:59:31.218586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.261 [2024-11-20 08:59:31.229232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.261 [2024-11-20 08:59:31.229260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.261 [2024-11-20 08:59:31.241995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.261 [2024-11-20 08:59:31.242022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.261 [2024-11-20 08:59:31.252273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.261 [2024-11-20 08:59:31.252300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.261 [2024-11-20 08:59:31.262620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.261 [2024-11-20 08:59:31.262648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.261 [2024-11-20 08:59:31.273201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:54.261 [2024-11-20 08:59:31.273229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.261 [2024-11-20 08:59:31.283990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.261 [2024-11-20 08:59:31.284018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.519 [2024-11-20 08:59:31.294884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.519 [2024-11-20 08:59:31.294912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.519 [2024-11-20 08:59:31.307565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.519 [2024-11-20 08:59:31.307592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.519 [2024-11-20 08:59:31.317445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.519 [2024-11-20 08:59:31.317473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.519 [2024-11-20 08:59:31.327961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.519 [2024-11-20 08:59:31.327988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.519 [2024-11-20 08:59:31.338676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.519 [2024-11-20 08:59:31.338704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.519 [2024-11-20 08:59:31.351415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.519 [2024-11-20 08:59:31.351444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.519 [2024-11-20 08:59:31.361679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.519 
[2024-11-20 08:59:31.361706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.519 [2024-11-20 08:59:31.371994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.519 [2024-11-20 08:59:31.372021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.519 [2024-11-20 08:59:31.382672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.519 [2024-11-20 08:59:31.382700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.519 [2024-11-20 08:59:31.393287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.519 [2024-11-20 08:59:31.393324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.519 [2024-11-20 08:59:31.404190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.519 [2024-11-20 08:59:31.404218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.519 [2024-11-20 08:59:31.414843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.519 [2024-11-20 08:59:31.414870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.519 [2024-11-20 08:59:31.425806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.519 [2024-11-20 08:59:31.425834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.519 [2024-11-20 08:59:31.436107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.519 [2024-11-20 08:59:31.436135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.519 [2024-11-20 08:59:31.446917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.519 [2024-11-20 08:59:31.446944] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.519 [2024-11-20 08:59:31.457557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.519 [2024-11-20 08:59:31.457585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.519 [2024-11-20 08:59:31.468157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.519 [2024-11-20 08:59:31.468184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.519 [2024-11-20 08:59:31.478726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.519 [2024-11-20 08:59:31.478753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.519 [2024-11-20 08:59:31.489455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.519 [2024-11-20 08:59:31.489483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.519 [2024-11-20 08:59:31.502045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.519 [2024-11-20 08:59:31.502072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.519 [2024-11-20 08:59:31.511646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.519 [2024-11-20 08:59:31.511690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.519 [2024-11-20 08:59:31.522067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.519 [2024-11-20 08:59:31.522095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.519 [2024-11-20 08:59:31.532640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.519 [2024-11-20 08:59:31.532670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:54.519 [2024-11-20 08:59:31.545324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.519 [2024-11-20 08:59:31.545353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.778 [2024-11-20 08:59:31.555523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.778 [2024-11-20 08:59:31.555552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.778 [2024-11-20 08:59:31.566183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.778 [2024-11-20 08:59:31.566211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.778 [2024-11-20 08:59:31.577141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.778 [2024-11-20 08:59:31.577169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.779 [2024-11-20 08:59:31.587984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.779 [2024-11-20 08:59:31.588012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.779 [2024-11-20 08:59:31.601071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.779 [2024-11-20 08:59:31.601105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.779 [2024-11-20 08:59:31.611338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.779 [2024-11-20 08:59:31.611366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.779 [2024-11-20 08:59:31.622014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.779 [2024-11-20 08:59:31.622043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.779 [2024-11-20 08:59:31.632756] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.779 [2024-11-20 08:59:31.632784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.779 [2024-11-20 08:59:31.643335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.779 [2024-11-20 08:59:31.643363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.779 [2024-11-20 08:59:31.653769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.779 [2024-11-20 08:59:31.653797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.779 [2024-11-20 08:59:31.664271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.779 [2024-11-20 08:59:31.664298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.779 [2024-11-20 08:59:31.675192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.779 [2024-11-20 08:59:31.675220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.779 [2024-11-20 08:59:31.685941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.779 [2024-11-20 08:59:31.685976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.779 11857.00 IOPS, 92.63 MiB/s [2024-11-20T07:59:31.808Z] [2024-11-20 08:59:31.698630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.779 [2024-11-20 08:59:31.698658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.779 [2024-11-20 08:59:31.709066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.779 [2024-11-20 08:59:31.709094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.779 [2024-11-20 08:59:31.719444] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.779 [2024-11-20 08:59:31.719471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.779 [2024-11-20 08:59:31.730133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.779 [2024-11-20 08:59:31.730176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.779 [2024-11-20 08:59:31.742711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.779 [2024-11-20 08:59:31.742740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.779 [2024-11-20 08:59:31.752451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.779 [2024-11-20 08:59:31.752479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.779 [2024-11-20 08:59:31.763439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.779 [2024-11-20 08:59:31.763467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.779 [2024-11-20 08:59:31.774066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.779 [2024-11-20 08:59:31.774095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.779 [2024-11-20 08:59:31.784546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.779 [2024-11-20 08:59:31.784575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.779 [2024-11-20 08:59:31.794773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:54.779 [2024-11-20 08:59:31.794802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:54.779 [2024-11-20 08:59:31.805571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:54.779 [2024-11-20 08:59:31.805599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.038 [2024-11-20 08:59:31.816615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.038 [2024-11-20 08:59:31.816642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.038 [2024-11-20 08:59:31.826822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.038 [2024-11-20 08:59:31.826850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.038 [2024-11-20 08:59:31.837549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.038 [2024-11-20 08:59:31.837578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.038 [2024-11-20 08:59:31.848424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.038 [2024-11-20 08:59:31.848453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.038 [2024-11-20 08:59:31.859485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.038 [2024-11-20 08:59:31.859513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.038 [2024-11-20 08:59:31.871934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.038 [2024-11-20 08:59:31.871963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.038 [2024-11-20 08:59:31.881689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.038 [2024-11-20 08:59:31.881717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.038 [2024-11-20 08:59:31.892050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.038 
[2024-11-20 08:59:31.892090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.038 [2024-11-20 08:59:31.902377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.038 [2024-11-20 08:59:31.902405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.038 [2024-11-20 08:59:31.913024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.038 [2024-11-20 08:59:31.913052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.038 [2024-11-20 08:59:31.923466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.038 [2024-11-20 08:59:31.923494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.038 [2024-11-20 08:59:31.934368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.038 [2024-11-20 08:59:31.934396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.038 [2024-11-20 08:59:31.944794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.038 [2024-11-20 08:59:31.944821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.038 [2024-11-20 08:59:31.955446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.038 [2024-11-20 08:59:31.955475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.038 [2024-11-20 08:59:31.966024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.038 [2024-11-20 08:59:31.966052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.038 [2024-11-20 08:59:31.976667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.038 [2024-11-20 08:59:31.976695] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.038 [2024-11-20 08:59:31.987498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.038 [2024-11-20 08:59:31.987525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.038 [2024-11-20 08:59:32.000428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.038 [2024-11-20 08:59:32.000456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.038 [2024-11-20 08:59:32.012402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.038 [2024-11-20 08:59:32.012430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.038 [2024-11-20 08:59:32.022153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.038 [2024-11-20 08:59:32.022196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.038 [2024-11-20 08:59:32.033821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.038 [2024-11-20 08:59:32.033864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.038 [2024-11-20 08:59:32.044536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.038 [2024-11-20 08:59:32.044564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.038 [2024-11-20 08:59:32.056459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.038 [2024-11-20 08:59:32.056487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:55.297 [2024-11-20 08:59:32.065201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:55.297 [2024-11-20 08:59:32.065228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:55.297 [2024-11-20 08:59:32.076242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:55.297 [2024-11-20 08:59:32.076270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two *ERROR* lines repeat at roughly 10 ms intervals through 08:59:32.681751 while the test loops on the duplicate-NSID add ...]
00:08:55.815 11884.75 IOPS, 92.85 MiB/s [2024-11-20T07:59:32.844Z]
[... the *ERROR* pair continues repeating through 08:59:33.678985 ...]
00:08:56.851 [2024-11-20 08:59:33.689775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext:
*ERROR*: Requested NSID 1 already in use
00:08:56.851 [2024-11-20 08:59:33.689804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:56.851 11878.20 IOPS, 92.80 MiB/s [2024-11-20T07:59:33.880Z]
00:08:56.851 [2024-11-20 08:59:33.699408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:56.851 [2024-11-20 08:59:33.699435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:56.851
00:08:56.851                                                 Latency(us)
[2024-11-20T07:59:33.880Z] Device Information       : runtime(s)     IOPS    MiB/s   Fail/s   TO/s    Average      min      max
00:08:56.851 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:08:56.851 Nvme1n1                               :       5.01 11883.00    92.84     0.00   0.00   10759.05  4441.88 20680.25
[2024-11-20T07:59:33.880Z] ===================================================================================================================
[2024-11-20T07:59:33.880Z] Total                                 :            11883.00    92.84     0.00   0.00   10759.05  4441.88 20680.25
00:08:56.851 [2024-11-20 08:59:33.705534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:56.851 [2024-11-20 08:59:33.705574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same *ERROR* pair repeats at roughly 8 ms intervals from 08:59:33.713555 through 08:59:33.857958 as the run winds down ...]
00:08:56.851 [2024-11-20 08:59:33.865959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext:
*ERROR*: Requested NSID 1 already in use 00:08:56.851 [2024-11-20 08:59:33.865979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.851 [2024-11-20 08:59:33.873990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.851 [2024-11-20 08:59:33.874012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.110 [2024-11-20 08:59:33.882051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.110 [2024-11-20 08:59:33.882086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.110 [2024-11-20 08:59:33.890098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.110 [2024-11-20 08:59:33.890143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.110 [2024-11-20 08:59:33.898100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.110 [2024-11-20 08:59:33.898141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.110 [2024-11-20 08:59:33.906070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.110 [2024-11-20 08:59:33.906090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.110 [2024-11-20 08:59:33.914089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.110 [2024-11-20 08:59:33.914109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.110 [2024-11-20 08:59:33.922110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.110 [2024-11-20 08:59:33.922129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.110 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3248300) - No such process 00:08:57.110 
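The flood of paired errors above comes from the zcopy test repeatedly trying to register namespace ID 1 on cnode1 while it is already attached; each RPC is rejected by the target's duplicate-NSID guard, which is why the errors interleave with the in-flight IOPS readings. A minimal Python sketch of that guard (illustrative only — `add_ns` and the dict-based registry are hypothetical stand-ins, not SPDK's actual `spdk_nvmf_subsystem_add_ns_ext`):

```python
def add_ns(namespaces, nsid, bdev_name):
    """Reject a namespace ID that is already registered, mirroring the
    'Requested NSID ... already in use' error seen in the log (illustrative only)."""
    if nsid in namespaces:
        raise ValueError(f"Requested NSID {nsid} already in use")
    namespaces[nsid] = bdev_name
    return nsid
```

In the run above this is the expected error path: the script later removes NSID 1 with `nvmf_subsystem_remove_ns` and re-adds it backed by the `delay0` bdev, at which point the add succeeds.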
08:59:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3248300 00:08:57.110 08:59:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.110 08:59:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.110 08:59:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:57.110 08:59:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.110 08:59:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:57.110 08:59:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.110 08:59:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:57.110 delay0 00:08:57.110 08:59:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.110 08:59:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:57.110 08:59:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.110 08:59:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:57.110 08:59:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.110 08:59:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:57.110 [2024-11-20 08:59:34.089405] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service 
referral 00:09:03.664 Initializing NVMe Controllers 00:09:03.664 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:03.664 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:03.664 Initialization complete. Launching workers. 00:09:03.664 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 73 00:09:03.664 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 360, failed to submit 33 00:09:03.664 success 175, unsuccessful 185, failed 0 00:09:03.664 08:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:03.664 08:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:03.664 08:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:03.664 08:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:03.664 08:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:03.664 08:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:03.664 08:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:03.664 08:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:03.664 rmmod nvme_tcp 00:09:03.664 rmmod nvme_fabrics 00:09:03.664 rmmod nvme_keyring 00:09:03.664 08:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:03.664 08:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:03.664 08:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:03.664 08:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3247032 ']' 00:09:03.664 08:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 
3247032 00:09:03.664 08:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 3247032 ']' 00:09:03.664 08:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 3247032 00:09:03.664 08:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:09:03.664 08:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:03.664 08:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3247032 00:09:03.664 08:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:03.664 08:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:03.664 08:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3247032' 00:09:03.664 killing process with pid 3247032 00:09:03.664 08:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 3247032 00:09:03.664 08:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 3247032 00:09:03.664 08:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:03.664 08:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:03.664 08:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:03.664 08:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:03.664 08:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:03.664 08:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:03.664 08:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:03.664 08:59:40 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:03.664 08:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:03.664 08:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.664 08:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:03.664 08:59:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.569 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:05.569 00:09:05.569 real 0m28.126s 00:09:05.569 user 0m41.684s 00:09:05.569 sys 0m8.094s 00:09:05.569 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:05.569 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:05.569 ************************************ 00:09:05.569 END TEST nvmf_zcopy 00:09:05.569 ************************************ 00:09:05.828 08:59:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:05.828 08:59:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:05.828 08:59:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:05.828 08:59:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:05.828 ************************************ 00:09:05.828 START TEST nvmf_nmic 00:09:05.828 ************************************ 00:09:05.828 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:05.828 * Looking for test 
storage... 00:09:05.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:05.828 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:05.828 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:09:05.828 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:05.828 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:05.828 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.828 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.828 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.828 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.828 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.828 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.828 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.829 
08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:05.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.829 --rc genhtml_branch_coverage=1 00:09:05.829 --rc genhtml_function_coverage=1 00:09:05.829 --rc genhtml_legend=1 00:09:05.829 --rc geninfo_all_blocks=1 00:09:05.829 --rc 
geninfo_unexecuted_blocks=1 00:09:05.829 00:09:05.829 ' 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:05.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.829 --rc genhtml_branch_coverage=1 00:09:05.829 --rc genhtml_function_coverage=1 00:09:05.829 --rc genhtml_legend=1 00:09:05.829 --rc geninfo_all_blocks=1 00:09:05.829 --rc geninfo_unexecuted_blocks=1 00:09:05.829 00:09:05.829 ' 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:05.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.829 --rc genhtml_branch_coverage=1 00:09:05.829 --rc genhtml_function_coverage=1 00:09:05.829 --rc genhtml_legend=1 00:09:05.829 --rc geninfo_all_blocks=1 00:09:05.829 --rc geninfo_unexecuted_blocks=1 00:09:05.829 00:09:05.829 ' 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:05.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.829 --rc genhtml_branch_coverage=1 00:09:05.829 --rc genhtml_function_coverage=1 00:09:05.829 --rc genhtml_legend=1 00:09:05.829 --rc geninfo_all_blocks=1 00:09:05.829 --rc geninfo_unexecuted_blocks=1 00:09:05.829 00:09:05.829 ' 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:05.829 
08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:05.829 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:05.829 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:05.830 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:05.830 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:05.830 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.830 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:05.830 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.830 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:05.830 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:05.830 
08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:05.830 08:59:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:08.361 08:59:44 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:08.361 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:08.361 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:08.361 Found net devices under 0000:09:00.0: cvl_0_0 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:08.361 Found net devices under 0000:09:00.1: cvl_0_1 00:09:08.361 
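The discovery loop above maps each PCI function to its kernel netdev by expanding a sysfs glob and stripping the path. A minimal standalone sketch of that lookup, exercised against a throwaway fake tree instead of real hardware (the `cvl_0_0` name is taken from this log; the helper name and the `sys_root` parameter are ours):

```shell
#!/usr/bin/env bash
# Hypothetical helper mirroring common.sh's lookup:
#   pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
#   pci_net_devs=("${pci_net_devs[@]##*/}")
# The sysfs root is a parameter so the sketch can run against a fake tree.
pci_to_netdevs() {
    local sys_root=$1 pci=$2
    local devs=("$sys_root/bus/pci/devices/$pci/net/"*)
    # Keep only the basename of each entry, i.e. the interface name
    printf '%s\n' "${devs[@]##*/}"
}

# Demo: build a fake sysfs tree and resolve the interface name from it
root=$(mktemp -d)
mkdir -p "$root/bus/pci/devices/0000:09:00.0/net/cvl_0_0"
pci_to_netdevs "$root" "0000:09:00.0"   # prints: cvl_0_0
```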
08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:08.361 08:59:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:08.361 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:08.362 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:08.362 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:08.362 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:08.362 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:08.362 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:08.362 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:08.362 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:08.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.360 ms 00:09:08.362 00:09:08.362 --- 10.0.0.2 ping statistics --- 00:09:08.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.362 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:09:08.362 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:08.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:08.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:09:08.362 00:09:08.362 --- 10.0.0.1 ping statistics --- 00:09:08.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.362 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:09:08.362 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.362 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:08.362 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:08.362 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.362 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:08.362 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:08.362 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.362 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:08.362 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:08.362 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:08.362 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:08.362 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:08.362 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.362 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3251679 00:09:08.362 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
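The `nvmf_tcp_init` sequence traced above (netns creation, moving the target NIC, addressing, bringing links up, opening TCP/4420) can be condensed into the dry-run sketch below. It only prints the commands rather than executing them, since the real sequence needs root and a dedicated NIC; the `cvl_0_*` interface names and 10.0.0.0/24 addresses come from this log, while the function name is our own:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace plumbing: move the target NIC into a
# private netns, address both ends, bring the links up, and accept
# inbound NVMe/TCP traffic on port 4420. Printing instead of executing
# keeps this runnable without privileges.
print_nvmf_tcp_init() {
    local target_if=$1 initiator_if=$2
    local ns="${target_if}_ns_spdk"
    printf '%s\n' \
        "ip netns add $ns" \
        "ip link set $target_if netns $ns" \
        "ip addr add 10.0.0.1/24 dev $initiator_if" \
        "ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if" \
        "ip link set $initiator_if up" \
        "ip netns exec $ns ip link set $target_if up" \
        "iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT"
}

print_nvmf_tcp_init cvl_0_0 cvl_0_1
```

Keeping the target interface in its own namespace is what lets a single host play both initiator and target over real hardware, as the subsequent cross-namespace pings verify.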
00:09:08.362 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3251679 00:09:08.362 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 3251679 ']' 00:09:08.362 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.362 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:08.362 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.362 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:08.362 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.362 [2024-11-20 08:59:45.199142] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:09:08.362 [2024-11-20 08:59:45.199233] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.362 [2024-11-20 08:59:45.273697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:08.362 [2024-11-20 08:59:45.335795] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:08.362 [2024-11-20 08:59:45.335851] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:08.362 [2024-11-20 08:59:45.335865] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:08.362 [2024-11-20 08:59:45.335876] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:08.362 [2024-11-20 08:59:45.335899] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:08.362 [2024-11-20 08:59:45.337403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.362 [2024-11-20 08:59:45.337456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:08.362 [2024-11-20 08:59:45.337505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:08.362 [2024-11-20 08:59:45.337508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.620 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:08.620 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:09:08.620 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:08.620 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:08.620 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.620 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:08.620 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:08.620 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.620 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.620 [2024-11-20 08:59:45.493624] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:08.620 
08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.621 Malloc0 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.621 [2024-11-20 08:59:45.568854] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:08.621 test case1: single bdev can't be used in multiple subsystems 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.621 [2024-11-20 08:59:45.592679] bdev.c:8273:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:08.621 [2024-11-20 
08:59:45.592708] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:08.621 [2024-11-20 08:59:45.592739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.621 request: 00:09:08.621 { 00:09:08.621 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:08.621 "namespace": { 00:09:08.621 "bdev_name": "Malloc0", 00:09:08.621 "no_auto_visible": false 00:09:08.621 }, 00:09:08.621 "method": "nvmf_subsystem_add_ns", 00:09:08.621 "req_id": 1 00:09:08.621 } 00:09:08.621 Got JSON-RPC error response 00:09:08.621 response: 00:09:08.621 { 00:09:08.621 "code": -32602, 00:09:08.621 "message": "Invalid parameters" 00:09:08.621 } 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:08.621 Adding namespace failed - expected result. 
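The `nmic_status` bookkeeping above is a standard expected-failure check: the second `nvmf_subsystem_add_ns` must fail because Malloc0 is already claimed by cnode1, and a success would itself be the test failure. A distilled sketch of that pattern, with the RPC stubbed by a command that fails (an assumption; the subsystem names are from this log):

```shell
#!/usr/bin/env bash
# Expected-failure pattern from nmic.sh: record whether the command
# failed, then treat success as the error. `false` stands in (stub) for
# the second `rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2
# Malloc0` call, which must be rejected because the bdev is claimed.
nmic_status=0
false || nmic_status=1

if [ "$nmic_status" -eq 0 ]; then
    echo "Adding namespace passed - failure expected."
    exit 1
fi
echo "Adding namespace failed - expected result."
```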
00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:08.621 test case2: host connect to nvmf target in multiple paths 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.621 [2024-11-20 08:59:45.600796] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.621 08:59:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:09.554 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:10.119 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:10.119 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:09:10.119 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:10.119 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:09:10.119 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 
00:09:12.018 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:12.018 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:12.018 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:12.018 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:09:12.018 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:12.018 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:09:12.018 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:12.018 [global] 00:09:12.018 thread=1 00:09:12.018 invalidate=1 00:09:12.018 rw=write 00:09:12.018 time_based=1 00:09:12.018 runtime=1 00:09:12.018 ioengine=libaio 00:09:12.018 direct=1 00:09:12.018 bs=4096 00:09:12.018 iodepth=1 00:09:12.018 norandommap=0 00:09:12.018 numjobs=1 00:09:12.018 00:09:12.018 verify_dump=1 00:09:12.018 verify_backlog=512 00:09:12.018 verify_state_save=0 00:09:12.018 do_verify=1 00:09:12.018 verify=crc32c-intel 00:09:12.018 [job0] 00:09:12.018 filename=/dev/nvme0n1 00:09:12.018 Could not set queue depth (nvme0n1) 00:09:12.279 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:12.279 fio-3.35 00:09:12.279 Starting 1 thread 00:09:13.653 00:09:13.653 job0: (groupid=0, jobs=1): err= 0: pid=3252312: Wed Nov 20 08:59:50 2024 00:09:13.653 read: IOPS=1671, BW=6685KiB/s (6846kB/s)(6692KiB/1001msec) 00:09:13.653 slat (nsec): min=6318, max=55679, avg=10191.85, stdev=5060.95 00:09:13.653 clat (usec): min=186, max=40882, avg=348.72, stdev=1974.64 00:09:13.653 lat (usec): min=194, max=40901, 
avg=358.91, stdev=1974.96 00:09:13.653 clat percentiles (usec): 00:09:13.653 | 1.00th=[ 198], 5.00th=[ 208], 10.00th=[ 215], 20.00th=[ 227], 00:09:13.653 | 30.00th=[ 235], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 258], 00:09:13.653 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[ 302], 00:09:13.653 | 99.00th=[ 433], 99.50th=[ 498], 99.90th=[40633], 99.95th=[40633], 00:09:13.653 | 99.99th=[40633] 00:09:13.653 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:13.653 slat (nsec): min=8091, max=63486, avg=14839.61, stdev=6995.06 00:09:13.653 clat (usec): min=129, max=404, avg=173.51, stdev=31.15 00:09:13.653 lat (usec): min=138, max=415, avg=188.35, stdev=35.35 00:09:13.653 clat percentiles (usec): 00:09:13.653 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 147], 00:09:13.653 | 30.00th=[ 153], 40.00th=[ 159], 50.00th=[ 167], 60.00th=[ 176], 00:09:13.653 | 70.00th=[ 184], 80.00th=[ 196], 90.00th=[ 217], 95.00th=[ 229], 00:09:13.653 | 99.00th=[ 281], 99.50th=[ 297], 99.90th=[ 363], 99.95th=[ 383], 00:09:13.653 | 99.99th=[ 404] 00:09:13.653 bw ( KiB/s): min= 8920, max= 8920, per=100.00%, avg=8920.00, stdev= 0.00, samples=1 00:09:13.653 iops : min= 2230, max= 2230, avg=2230.00, stdev= 0.00, samples=1 00:09:13.653 lat (usec) : 250=77.16%, 500=22.66%, 750=0.08% 00:09:13.653 lat (msec) : 50=0.11% 00:09:13.653 cpu : usr=4.40%, sys=5.60%, ctx=3721, majf=0, minf=1 00:09:13.653 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:13.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.653 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.653 issued rwts: total=1673,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.653 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:13.653 00:09:13.653 Run status group 0 (all jobs): 00:09:13.653 READ: bw=6685KiB/s (6846kB/s), 6685KiB/s-6685KiB/s (6846kB/s-6846kB/s), io=6692KiB (6853kB), 
run=1001-1001msec 00:09:13.653 WRITE: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:09:13.653 00:09:13.653 Disk stats (read/write): 00:09:13.653 nvme0n1: ios=1646/2048, merge=0/0, ticks=496/316, in_queue=812, util=91.48% 00:09:13.653 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:13.653 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:13.653 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:13.653 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:09:13.653 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:13.653 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:13.653 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:13.653 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:13.653 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:09:13.653 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:13.653 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:13.653 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:13.653 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:13.653 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:13.653 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:13.653 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- 
# for i in {1..20} 00:09:13.653 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:13.653 rmmod nvme_tcp 00:09:13.653 rmmod nvme_fabrics 00:09:13.653 rmmod nvme_keyring 00:09:13.653 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:13.653 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:13.653 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:13.653 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3251679 ']' 00:09:13.653 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3251679 00:09:13.653 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 3251679 ']' 00:09:13.653 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 3251679 00:09:13.653 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:09:13.653 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:13.653 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3251679 00:09:13.653 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:13.653 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:13.653 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3251679' 00:09:13.653 killing process with pid 3251679 00:09:13.653 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 3251679 00:09:13.653 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 3251679 00:09:13.911 08:59:50 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:13.911 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:13.911 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:13.911 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:13.911 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:13.911 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:13.911 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:13.912 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:13.912 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:13.912 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.912 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.912 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.448 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:16.448 00:09:16.448 real 0m10.265s 00:09:16.448 user 0m23.049s 00:09:16.448 sys 0m2.587s 00:09:16.448 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:16.448 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.448 ************************************ 00:09:16.448 END TEST nvmf_nmic 00:09:16.448 ************************************ 00:09:16.448 08:59:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:16.448 08:59:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:16.448 08:59:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:16.448 08:59:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:16.448 ************************************ 00:09:16.448 START TEST nvmf_fio_target 00:09:16.448 ************************************ 00:09:16.448 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:16.448 * Looking for test storage... 00:09:16.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:16.448 08:59:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:16.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.448 --rc genhtml_branch_coverage=1 00:09:16.448 --rc genhtml_function_coverage=1 00:09:16.448 --rc genhtml_legend=1 00:09:16.448 --rc geninfo_all_blocks=1 00:09:16.448 --rc geninfo_unexecuted_blocks=1 00:09:16.448 00:09:16.448 ' 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:16.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.448 --rc genhtml_branch_coverage=1 00:09:16.448 --rc genhtml_function_coverage=1 00:09:16.448 --rc genhtml_legend=1 00:09:16.448 --rc geninfo_all_blocks=1 00:09:16.448 --rc geninfo_unexecuted_blocks=1 00:09:16.448 00:09:16.448 ' 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:16.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.448 --rc genhtml_branch_coverage=1 00:09:16.448 --rc genhtml_function_coverage=1 00:09:16.448 --rc genhtml_legend=1 00:09:16.448 --rc geninfo_all_blocks=1 00:09:16.448 --rc geninfo_unexecuted_blocks=1 00:09:16.448 00:09:16.448 ' 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 
00:09:16.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.448 --rc genhtml_branch_coverage=1 00:09:16.448 --rc genhtml_function_coverage=1 00:09:16.448 --rc genhtml_legend=1 00:09:16.448 --rc geninfo_all_blocks=1 00:09:16.448 --rc geninfo_unexecuted_blocks=1 00:09:16.448 00:09:16.448 ' 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:16.448 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # 
NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:16.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:16.449 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:18.353 08:59:55 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:18.353 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:18.353 08:59:55 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:18.353 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:18.353 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:18.354 Found net devices under 0000:09:00.0: cvl_0_0 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:18.354 Found net devices under 0000:09:00.1: cvl_0_1 
00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:18.354 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:18.613 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:18.613 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:18.613 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:18.613 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:18.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:18.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:09:18.613 00:09:18.613 --- 10.0.0.2 ping statistics --- 00:09:18.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.613 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:09:18.613 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:18.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:18.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:09:18.613 00:09:18.613 --- 10.0.0.1 ping statistics --- 00:09:18.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.613 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:09:18.613 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:18.613 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:18.613 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:18.613 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:18.613 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:18.613 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:18.613 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:18.613 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:18.613 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:18.613 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:18.613 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
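[Editor's note] The firewall step traced above uses an `ipts` helper that wraps `iptables` and tags the rule with an `SPDK_NVMF:` comment so teardown can find it later. Below is a minimal sketch of that pattern; `ipts_sketch` is a hypothetical name, and it echoes the command instead of invoking `iptables` so it runs unprivileged (the real helper in `nvmf/common.sh` executes the rule).

```shell
# Sketch of the ipts wrapper pattern from the trace above: forward all
# arguments to iptables and append a comment match recording the original
# arguments ("SPDK_NVMF:<args>"), so cleanup can grep for SPDK-owned rules.
# Echoes instead of executing, purely for illustration.
ipts_sketch() {
    echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts_sketch -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```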
00:09:18.613 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:18.613 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:18.613 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3254408 00:09:18.613 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:18.613 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3254408 00:09:18.613 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 3254408 ']' 00:09:18.613 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.613 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:18.613 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.613 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:18.613 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:18.613 [2024-11-20 08:59:55.504218] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:09:18.613 [2024-11-20 08:59:55.504333] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:18.613 [2024-11-20 08:59:55.577515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:18.613 [2024-11-20 08:59:55.637693] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:18.613 [2024-11-20 08:59:55.637739] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:18.613 [2024-11-20 08:59:55.637767] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:18.614 [2024-11-20 08:59:55.637778] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:18.614 [2024-11-20 08:59:55.637788] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
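[Editor's note] The `waitforlisten 3254408` call above blocks until the freshly started `nvmf_tgt` creates its RPC socket at `/var/tmp/spdk.sock`. A simplified sketch of that polling idiom follows; `wait_for_path` is a hypothetical name, and the real helper in `autotest_common.sh` additionally verifies the pid is still alive and uses `max_retries=100` as seen in the trace.

```shell
# Simplified sketch of the waitforlisten pattern: poll until a path (e.g. a
# UNIX domain socket) appears, giving up after max_retries attempts.
# Returns 0 once the path exists, 1 on timeout.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i=0
    until [ -e "$path" ]; do
        i=$((i + 1))
        if [ "$i" -gt "$max_retries" ]; then
            return 1
        fi
        sleep 0.1
    done
    return 0
}
```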
00:09:18.614 [2024-11-20 08:59:55.639579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:18.614 [2024-11-20 08:59:55.639642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:18.614 [2024-11-20 08:59:55.639757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.614 [2024-11-20 08:59:55.639748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:18.871 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:18.871 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:09:18.871 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:18.871 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:18.871 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:18.871 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:18.871 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:19.128 [2024-11-20 08:59:56.120439] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:19.128 08:59:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:19.694 08:59:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:19.694 08:59:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:19.951 08:59:56 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:19.951 08:59:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:20.210 08:59:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:20.210 08:59:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:20.467 08:59:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:20.467 08:59:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:20.725 08:59:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:20.983 08:59:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:20.983 08:59:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:21.241 08:59:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:21.241 08:59:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:21.498 08:59:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:21.498 08:59:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:09:21.756 08:59:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:22.013 08:59:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:22.013 08:59:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:22.271 08:59:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:22.271 08:59:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:22.836 08:59:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:22.836 [2024-11-20 08:59:59.811243] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:22.836 08:59:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:23.093 09:00:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:23.659 09:00:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
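[Editor's note] After the `nvme connect` above, the harness calls `waitforserial SPDKISFASTANDAWESOME 4` to poll until all four namespaces of cnode1 surface as block devices. A condensed sketch of that loop follows; `waitforserial_sketch`, `LIST_CMD`, and `fake_lsblk` are hypothetical names introduced so the loop can be exercised without real NVMe devices (the real helper runs `lsblk -l -o NAME,SERIAL` directly, as the trace shows).

```shell
# Condensed sketch of the waitforserial pattern: count devices whose listing
# line carries the expected serial, retrying up to 16 times. LIST_CMD is an
# illustrative hook that defaults to the real lsblk invocation.
waitforserial_sketch() {
    local serial=$1 want=${2:-1} i=0 n
    while (( i++ <= 15 )); do
        n=$(${LIST_CMD:-lsblk -l -o NAME,SERIAL} | grep -c "$serial" || true)
        (( n >= want )) && return 0
        sleep 0.1
    done
    return 1
}
```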
00:09:24.224 09:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:24.224 09:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:09:24.224 09:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:24.224 09:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:09:24.224 09:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:09:24.224 09:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:09:26.123 09:00:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:26.123 09:00:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:26.123 09:00:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:26.123 09:00:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:09:26.123 09:00:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:26.123 09:00:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:09:26.123 09:00:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:26.123 [global] 00:09:26.123 thread=1 00:09:26.123 invalidate=1 00:09:26.123 rw=write 00:09:26.123 time_based=1 00:09:26.123 runtime=1 00:09:26.123 ioengine=libaio 00:09:26.123 direct=1 00:09:26.123 bs=4096 00:09:26.123 iodepth=1 00:09:26.123 norandommap=0 00:09:26.123 numjobs=1 00:09:26.123 00:09:26.123 
verify_dump=1 00:09:26.123 verify_backlog=512 00:09:26.123 verify_state_save=0 00:09:26.123 do_verify=1 00:09:26.123 verify=crc32c-intel 00:09:26.123 [job0] 00:09:26.123 filename=/dev/nvme0n1 00:09:26.123 [job1] 00:09:26.123 filename=/dev/nvme0n2 00:09:26.123 [job2] 00:09:26.123 filename=/dev/nvme0n3 00:09:26.123 [job3] 00:09:26.123 filename=/dev/nvme0n4 00:09:26.381 Could not set queue depth (nvme0n1) 00:09:26.381 Could not set queue depth (nvme0n2) 00:09:26.381 Could not set queue depth (nvme0n3) 00:09:26.381 Could not set queue depth (nvme0n4) 00:09:26.381 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:26.381 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:26.381 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:26.381 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:26.381 fio-3.35 00:09:26.381 Starting 4 threads 00:09:27.754 00:09:27.754 job0: (groupid=0, jobs=1): err= 0: pid=3255644: Wed Nov 20 09:00:04 2024 00:09:27.754 read: IOPS=32, BW=130KiB/s (134kB/s)(132KiB/1012msec) 00:09:27.754 slat (nsec): min=7687, max=49904, avg=26870.91, stdev=10080.17 00:09:27.754 clat (usec): min=272, max=41993, avg=27769.38, stdev=19659.13 00:09:27.754 lat (usec): min=293, max=42015, avg=27796.25, stdev=19659.82 00:09:27.754 clat percentiles (usec): 00:09:27.754 | 1.00th=[ 273], 5.00th=[ 289], 10.00th=[ 338], 20.00th=[ 424], 00:09:27.754 | 30.00th=[ 529], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:27.754 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:27.754 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:27.754 | 99.99th=[42206] 00:09:27.754 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:09:27.754 slat (nsec): min=8307, max=24911, 
avg=9528.29, stdev=1524.32 00:09:27.754 clat (usec): min=146, max=296, avg=172.07, stdev=12.57 00:09:27.754 lat (usec): min=155, max=317, avg=181.60, stdev=13.07 00:09:27.754 clat percentiles (usec): 00:09:27.754 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 163], 00:09:27.754 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 174], 00:09:27.754 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 194], 00:09:27.754 | 99.00th=[ 204], 99.50th=[ 212], 99.90th=[ 297], 99.95th=[ 297], 00:09:27.754 | 99.99th=[ 297] 00:09:27.754 bw ( KiB/s): min= 4096, max= 4096, per=25.30%, avg=4096.00, stdev= 0.00, samples=1 00:09:27.754 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:27.754 lat (usec) : 250=93.76%, 500=1.83%, 750=0.37% 00:09:27.754 lat (msec) : 50=4.04% 00:09:27.754 cpu : usr=0.49%, sys=0.59%, ctx=548, majf=0, minf=1 00:09:27.754 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:27.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.754 issued rwts: total=33,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.754 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:27.754 job1: (groupid=0, jobs=1): err= 0: pid=3255666: Wed Nov 20 09:00:04 2024 00:09:27.754 read: IOPS=1659, BW=6637KiB/s (6797kB/s)(6644KiB/1001msec) 00:09:27.754 slat (nsec): min=5825, max=55225, avg=14103.59, stdev=5846.84 00:09:27.754 clat (usec): min=172, max=41098, avg=319.16, stdev=1725.31 00:09:27.754 lat (usec): min=178, max=41115, avg=333.27, stdev=1725.52 00:09:27.754 clat percentiles (usec): 00:09:27.754 | 1.00th=[ 186], 5.00th=[ 196], 10.00th=[ 204], 20.00th=[ 221], 00:09:27.754 | 30.00th=[ 231], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 251], 00:09:27.754 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 293], 00:09:27.754 | 99.00th=[ 396], 99.50th=[ 404], 99.90th=[41157], 
99.95th=[41157], 00:09:27.754 | 99.99th=[41157] 00:09:27.754 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:27.754 slat (nsec): min=7826, max=62552, avg=17520.11, stdev=6590.75 00:09:27.754 clat (usec): min=125, max=748, avg=191.63, stdev=63.12 00:09:27.754 lat (usec): min=133, max=770, avg=209.15, stdev=62.48 00:09:27.754 clat percentiles (usec): 00:09:27.754 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 149], 00:09:27.754 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 180], 00:09:27.754 | 70.00th=[ 188], 80.00th=[ 237], 90.00th=[ 245], 95.00th=[ 355], 00:09:27.754 | 99.00th=[ 433], 99.50th=[ 474], 99.90th=[ 529], 99.95th=[ 537], 00:09:27.754 | 99.99th=[ 750] 00:09:27.754 bw ( KiB/s): min= 8192, max= 8192, per=50.60%, avg=8192.00, stdev= 0.00, samples=1 00:09:27.754 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:27.754 lat (usec) : 250=76.46%, 500=23.32%, 750=0.13% 00:09:27.754 lat (msec) : 50=0.08% 00:09:27.754 cpu : usr=5.10%, sys=7.10%, ctx=3710, majf=0, minf=1 00:09:27.754 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:27.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.754 issued rwts: total=1661,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.754 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:27.754 job2: (groupid=0, jobs=1): err= 0: pid=3255700: Wed Nov 20 09:00:04 2024 00:09:27.754 read: IOPS=1000, BW=4000KiB/s (4096kB/s)(4004KiB/1001msec) 00:09:27.754 slat (nsec): min=5918, max=60648, avg=16560.57, stdev=7445.18 00:09:27.754 clat (usec): min=203, max=41321, avg=741.26, stdev=4239.43 00:09:27.754 lat (usec): min=209, max=41338, avg=757.82, stdev=4239.72 00:09:27.754 clat percentiles (usec): 00:09:27.754 | 1.00th=[ 212], 5.00th=[ 225], 10.00th=[ 239], 20.00th=[ 249], 00:09:27.754 | 30.00th=[ 255], 
40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:09:27.754 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 449], 95.00th=[ 537], 00:09:27.754 | 99.00th=[40109], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:27.754 | 99.99th=[41157] 00:09:27.754 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:27.754 slat (nsec): min=7888, max=70151, avg=15199.91, stdev=7391.46 00:09:27.754 clat (usec): min=150, max=261, avg=210.70, stdev=30.45 00:09:27.754 lat (usec): min=162, max=275, avg=225.90, stdev=25.99 00:09:27.754 clat percentiles (usec): 00:09:27.754 | 1.00th=[ 157], 5.00th=[ 169], 10.00th=[ 176], 20.00th=[ 182], 00:09:27.754 | 30.00th=[ 186], 40.00th=[ 192], 50.00th=[ 200], 60.00th=[ 239], 00:09:27.754 | 70.00th=[ 241], 80.00th=[ 243], 90.00th=[ 245], 95.00th=[ 247], 00:09:27.754 | 99.00th=[ 258], 99.50th=[ 260], 99.90th=[ 262], 99.95th=[ 262], 00:09:27.754 | 99.99th=[ 262] 00:09:27.755 bw ( KiB/s): min= 4096, max= 4096, per=25.30%, avg=4096.00, stdev= 0.00, samples=1 00:09:27.755 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:27.755 lat (usec) : 250=59.95%, 500=35.46%, 750=4.05% 00:09:27.755 lat (msec) : 50=0.54% 00:09:27.755 cpu : usr=2.40%, sys=4.50%, ctx=2025, majf=0, minf=1 00:09:27.755 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:27.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.755 issued rwts: total=1001,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.755 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:27.755 job3: (groupid=0, jobs=1): err= 0: pid=3255712: Wed Nov 20 09:00:04 2024 00:09:27.755 read: IOPS=21, BW=87.9KiB/s (90.0kB/s)(88.0KiB/1001msec) 00:09:27.755 slat (nsec): min=7447, max=36021, avg=28070.95, stdev=10066.25 00:09:27.755 clat (usec): min=9940, max=41042, avg=39520.59, stdev=6607.92 00:09:27.755 lat (usec): 
min=9955, max=41069, avg=39548.66, stdev=6610.84 00:09:27.755 clat percentiles (usec): 00:09:27.755 | 1.00th=[ 9896], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:09:27.755 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:27.755 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:27.755 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:27.755 | 99.99th=[41157] 00:09:27.755 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:09:27.755 slat (nsec): min=8429, max=29762, avg=9853.57, stdev=2022.29 00:09:27.755 clat (usec): min=161, max=495, avg=239.89, stdev=39.92 00:09:27.755 lat (usec): min=170, max=505, avg=249.75, stdev=39.87 00:09:27.755 clat percentiles (usec): 00:09:27.755 | 1.00th=[ 167], 5.00th=[ 178], 10.00th=[ 186], 20.00th=[ 208], 00:09:27.755 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 249], 00:09:27.755 | 70.00th=[ 253], 80.00th=[ 260], 90.00th=[ 273], 95.00th=[ 285], 00:09:27.755 | 99.00th=[ 396], 99.50th=[ 429], 99.90th=[ 494], 99.95th=[ 494], 00:09:27.755 | 99.99th=[ 494] 00:09:27.755 bw ( KiB/s): min= 4096, max= 4096, per=25.30%, avg=4096.00, stdev= 0.00, samples=1 00:09:27.755 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:27.755 lat (usec) : 250=60.67%, 500=35.21% 00:09:27.755 lat (msec) : 10=0.19%, 50=3.93% 00:09:27.755 cpu : usr=0.70%, sys=0.30%, ctx=536, majf=0, minf=1 00:09:27.755 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:27.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.755 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.755 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:27.755 00:09:27.755 Run status group 0 (all jobs): 00:09:27.755 READ: bw=10.5MiB/s (11.0MB/s), 87.9KiB/s-6637KiB/s 
(90.0kB/s-6797kB/s), io=10.6MiB (11.1MB), run=1001-1012msec 00:09:27.755 WRITE: bw=15.8MiB/s (16.6MB/s), 2024KiB/s-8184KiB/s (2072kB/s-8380kB/s), io=16.0MiB (16.8MB), run=1001-1012msec 00:09:27.755 00:09:27.755 Disk stats (read/write): 00:09:27.755 nvme0n1: ios=51/512, merge=0/0, ticks=1549/87, in_queue=1636, util=85.07% 00:09:27.755 nvme0n2: ios=1488/1536, merge=0/0, ticks=884/307, in_queue=1191, util=88.90% 00:09:27.755 nvme0n3: ios=795/1024, merge=0/0, ticks=655/203, in_queue=858, util=94.33% 00:09:27.755 nvme0n4: ios=40/512, merge=0/0, ticks=1599/123, in_queue=1722, util=94.07% 00:09:27.755 09:00:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:27.755 [global] 00:09:27.755 thread=1 00:09:27.755 invalidate=1 00:09:27.755 rw=randwrite 00:09:27.755 time_based=1 00:09:27.755 runtime=1 00:09:27.755 ioengine=libaio 00:09:27.755 direct=1 00:09:27.755 bs=4096 00:09:27.755 iodepth=1 00:09:27.755 norandommap=0 00:09:27.755 numjobs=1 00:09:27.755 00:09:27.755 verify_dump=1 00:09:27.755 verify_backlog=512 00:09:27.755 verify_state_save=0 00:09:27.755 do_verify=1 00:09:27.755 verify=crc32c-intel 00:09:27.755 [job0] 00:09:27.755 filename=/dev/nvme0n1 00:09:27.755 [job1] 00:09:27.755 filename=/dev/nvme0n2 00:09:27.755 [job2] 00:09:27.755 filename=/dev/nvme0n3 00:09:27.755 [job3] 00:09:27.755 filename=/dev/nvme0n4 00:09:27.755 Could not set queue depth (nvme0n1) 00:09:27.755 Could not set queue depth (nvme0n2) 00:09:27.755 Could not set queue depth (nvme0n3) 00:09:27.755 Could not set queue depth (nvme0n4) 00:09:28.012 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:28.013 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:28.013 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:09:28.013 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:28.013 fio-3.35 00:09:28.013 Starting 4 threads 00:09:29.471 00:09:29.471 job0: (groupid=0, jobs=1): err= 0: pid=3255956: Wed Nov 20 09:00:06 2024 00:09:29.471 read: IOPS=70, BW=281KiB/s (288kB/s)(292KiB/1038msec) 00:09:29.471 slat (nsec): min=12631, max=53568, avg=28255.08, stdev=9745.81 00:09:29.471 clat (usec): min=282, max=41949, avg=12691.90, stdev=18788.98 00:09:29.471 lat (usec): min=301, max=41985, avg=12720.15, stdev=18788.40 00:09:29.471 clat percentiles (usec): 00:09:29.471 | 1.00th=[ 285], 5.00th=[ 367], 10.00th=[ 379], 20.00th=[ 404], 00:09:29.471 | 30.00th=[ 420], 40.00th=[ 437], 50.00th=[ 461], 60.00th=[ 494], 00:09:29.471 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:29.471 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:29.471 | 99.99th=[42206] 00:09:29.471 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:09:29.471 slat (nsec): min=7871, max=65293, avg=14682.29, stdev=7726.82 00:09:29.471 clat (usec): min=143, max=346, avg=186.36, stdev=26.96 00:09:29.471 lat (usec): min=152, max=372, avg=201.04, stdev=29.80 00:09:29.471 clat percentiles (usec): 00:09:29.471 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:09:29.471 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 188], 00:09:29.471 | 70.00th=[ 194], 80.00th=[ 202], 90.00th=[ 223], 95.00th=[ 241], 00:09:29.471 | 99.00th=[ 262], 99.50th=[ 302], 99.90th=[ 347], 99.95th=[ 347], 00:09:29.471 | 99.99th=[ 347] 00:09:29.471 bw ( KiB/s): min= 4096, max= 4096, per=20.89%, avg=4096.00, stdev= 0.00, samples=1 00:09:29.471 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:29.471 lat (usec) : 250=85.81%, 500=9.23%, 750=1.20% 00:09:29.471 lat (msec) : 50=3.76% 00:09:29.471 cpu : usr=0.96%, sys=0.77%, ctx=586, majf=0, minf=1 
00:09:29.471 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:29.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.471 issued rwts: total=73,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.471 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:29.471 job1: (groupid=0, jobs=1): err= 0: pid=3255957: Wed Nov 20 09:00:06 2024 00:09:29.471 read: IOPS=1592, BW=6370KiB/s (6523kB/s)(6376KiB/1001msec) 00:09:29.471 slat (nsec): min=5041, max=60872, avg=19845.18, stdev=9993.58 00:09:29.471 clat (usec): min=177, max=1942, avg=313.48, stdev=65.84 00:09:29.472 lat (usec): min=182, max=1956, avg=333.32, stdev=67.19 00:09:29.472 clat percentiles (usec): 00:09:29.472 | 1.00th=[ 204], 5.00th=[ 223], 10.00th=[ 239], 20.00th=[ 273], 00:09:29.472 | 30.00th=[ 289], 40.00th=[ 302], 50.00th=[ 314], 60.00th=[ 326], 00:09:29.472 | 70.00th=[ 338], 80.00th=[ 351], 90.00th=[ 375], 95.00th=[ 396], 00:09:29.472 | 99.00th=[ 449], 99.50th=[ 469], 99.90th=[ 523], 99.95th=[ 1942], 00:09:29.472 | 99.99th=[ 1942] 00:09:29.472 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:29.472 slat (nsec): min=5914, max=42676, avg=12702.27, stdev=4184.94 00:09:29.472 clat (usec): min=127, max=1872, avg=207.92, stdev=58.52 00:09:29.472 lat (usec): min=135, max=1887, avg=220.63, stdev=58.41 00:09:29.472 clat percentiles (usec): 00:09:29.472 | 1.00th=[ 135], 5.00th=[ 151], 10.00th=[ 176], 20.00th=[ 184], 00:09:29.472 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 200], 00:09:29.472 | 70.00th=[ 208], 80.00th=[ 229], 90.00th=[ 260], 95.00th=[ 293], 00:09:29.472 | 99.00th=[ 388], 99.50th=[ 400], 99.90th=[ 441], 99.95th=[ 469], 00:09:29.472 | 99.99th=[ 1876] 00:09:29.472 bw ( KiB/s): min= 8192, max= 8192, per=41.77%, avg=8192.00, stdev= 0.00, samples=1 00:09:29.472 iops : min= 2048, max= 2048, avg=2048.00, stdev= 
0.00, samples=1 00:09:29.472 lat (usec) : 250=54.23%, 500=45.61%, 750=0.11% 00:09:29.472 lat (msec) : 2=0.05% 00:09:29.472 cpu : usr=3.60%, sys=5.70%, ctx=3642, majf=0, minf=1 00:09:29.472 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:29.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.472 issued rwts: total=1594,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.472 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:29.472 job2: (groupid=0, jobs=1): err= 0: pid=3255958: Wed Nov 20 09:00:06 2024 00:09:29.472 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:29.472 slat (nsec): min=5422, max=77807, avg=20639.16, stdev=10177.60 00:09:29.472 clat (usec): min=201, max=522, avg=319.81, stdev=49.82 00:09:29.472 lat (usec): min=209, max=556, avg=340.45, stdev=52.85 00:09:29.472 clat percentiles (usec): 00:09:29.472 | 1.00th=[ 219], 5.00th=[ 241], 10.00th=[ 265], 20.00th=[ 281], 00:09:29.472 | 30.00th=[ 293], 40.00th=[ 306], 50.00th=[ 310], 60.00th=[ 326], 00:09:29.472 | 70.00th=[ 343], 80.00th=[ 363], 90.00th=[ 392], 95.00th=[ 416], 00:09:29.472 | 99.00th=[ 449], 99.50th=[ 478], 99.90th=[ 523], 99.95th=[ 523], 00:09:29.472 | 99.99th=[ 523] 00:09:29.472 write: IOPS=2014, BW=8060KiB/s (8253kB/s)(8068KiB/1001msec); 0 zone resets 00:09:29.472 slat (nsec): min=6227, max=57250, avg=15074.00, stdev=6013.91 00:09:29.472 clat (usec): min=138, max=519, avg=212.82, stdev=50.56 00:09:29.472 lat (usec): min=146, max=560, avg=227.89, stdev=51.31 00:09:29.472 clat percentiles (usec): 00:09:29.472 | 1.00th=[ 153], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 184], 00:09:29.472 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 202], 00:09:29.472 | 70.00th=[ 215], 80.00th=[ 239], 90.00th=[ 265], 95.00th=[ 326], 00:09:29.472 | 99.00th=[ 412], 99.50th=[ 437], 99.90th=[ 457], 99.95th=[ 461], 00:09:29.472 | 
99.99th=[ 519] 00:09:29.472 bw ( KiB/s): min= 8192, max= 8192, per=41.77%, avg=8192.00, stdev= 0.00, samples=1 00:09:29.472 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:29.472 lat (usec) : 250=50.41%, 500=49.48%, 750=0.11% 00:09:29.472 cpu : usr=3.70%, sys=6.10%, ctx=3553, majf=0, minf=1 00:09:29.472 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:29.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.472 issued rwts: total=1536,2017,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.472 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:29.472 job3: (groupid=0, jobs=1): err= 0: pid=3255959: Wed Nov 20 09:00:06 2024 00:09:29.472 read: IOPS=462, BW=1849KiB/s (1893kB/s)(1884KiB/1019msec) 00:09:29.472 slat (nsec): min=7260, max=39850, avg=18172.96, stdev=4056.83 00:09:29.472 clat (usec): min=230, max=41025, avg=1832.04, stdev=7811.03 00:09:29.472 lat (usec): min=238, max=41042, avg=1850.21, stdev=7811.80 00:09:29.472 clat percentiles (usec): 00:09:29.472 | 1.00th=[ 235], 5.00th=[ 251], 10.00th=[ 258], 20.00th=[ 265], 00:09:29.472 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 277], 00:09:29.472 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 310], 95.00th=[ 351], 00:09:29.472 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:29.472 | 99.99th=[41157] 00:09:29.472 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:09:29.472 slat (nsec): min=6962, max=52119, avg=17412.50, stdev=7151.82 00:09:29.472 clat (usec): min=157, max=572, avg=253.53, stdev=65.52 00:09:29.472 lat (usec): min=166, max=581, avg=270.95, stdev=65.03 00:09:29.472 clat percentiles (usec): 00:09:29.472 | 1.00th=[ 176], 5.00th=[ 188], 10.00th=[ 196], 20.00th=[ 208], 00:09:29.472 | 30.00th=[ 217], 40.00th=[ 225], 50.00th=[ 235], 60.00th=[ 251], 00:09:29.472 | 
70.00th=[ 260], 80.00th=[ 277], 90.00th=[ 359], 95.00th=[ 396], 00:09:29.472 | 99.00th=[ 478], 99.50th=[ 486], 99.90th=[ 570], 99.95th=[ 570], 00:09:29.472 | 99.99th=[ 570] 00:09:29.472 bw ( KiB/s): min= 4096, max= 4096, per=20.89%, avg=4096.00, stdev= 0.00, samples=1 00:09:29.472 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:29.472 lat (usec) : 250=33.06%, 500=64.90%, 750=0.20% 00:09:29.472 lat (msec) : 50=1.83% 00:09:29.472 cpu : usr=1.38%, sys=1.96%, ctx=986, majf=0, minf=1 00:09:29.472 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:29.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.472 issued rwts: total=471,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.472 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:29.472 00:09:29.472 Run status group 0 (all jobs): 00:09:29.472 READ: bw=13.8MiB/s (14.5MB/s), 281KiB/s-6370KiB/s (288kB/s-6523kB/s), io=14.4MiB (15.0MB), run=1001-1038msec 00:09:29.472 WRITE: bw=19.2MiB/s (20.1MB/s), 1973KiB/s-8184KiB/s (2020kB/s-8380kB/s), io=19.9MiB (20.8MB), run=1001-1038msec 00:09:29.472 00:09:29.472 Disk stats (read/write): 00:09:29.472 nvme0n1: ios=72/512, merge=0/0, ticks=1014/95, in_queue=1109, util=97.70% 00:09:29.472 nvme0n2: ios=1491/1536, merge=0/0, ticks=531/325, in_queue=856, util=91.06% 00:09:29.472 nvme0n3: ios=1429/1536, merge=0/0, ticks=429/322, in_queue=751, util=89.04% 00:09:29.472 nvme0n4: ios=523/512, merge=0/0, ticks=1065/125, in_queue=1190, util=97.69% 00:09:29.472 09:00:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:29.472 [global] 00:09:29.472 thread=1 00:09:29.472 invalidate=1 00:09:29.472 rw=write 00:09:29.472 time_based=1 00:09:29.472 runtime=1 00:09:29.472 ioengine=libaio 
00:09:29.472 direct=1 00:09:29.472 bs=4096 00:09:29.472 iodepth=128 00:09:29.472 norandommap=0 00:09:29.472 numjobs=1 00:09:29.472 00:09:29.472 verify_dump=1 00:09:29.472 verify_backlog=512 00:09:29.472 verify_state_save=0 00:09:29.472 do_verify=1 00:09:29.472 verify=crc32c-intel 00:09:29.472 [job0] 00:09:29.472 filename=/dev/nvme0n1 00:09:29.472 [job1] 00:09:29.472 filename=/dev/nvme0n2 00:09:29.472 [job2] 00:09:29.472 filename=/dev/nvme0n3 00:09:29.472 [job3] 00:09:29.472 filename=/dev/nvme0n4 00:09:29.472 Could not set queue depth (nvme0n1) 00:09:29.472 Could not set queue depth (nvme0n2) 00:09:29.472 Could not set queue depth (nvme0n3) 00:09:29.472 Could not set queue depth (nvme0n4) 00:09:29.472 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:29.472 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:29.472 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:29.472 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:29.472 fio-3.35 00:09:29.472 Starting 4 threads 00:09:30.846 00:09:30.846 job0: (groupid=0, jobs=1): err= 0: pid=3256187: Wed Nov 20 09:00:07 2024 00:09:30.846 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:09:30.846 slat (usec): min=2, max=7648, avg=94.76, stdev=595.27 00:09:30.846 clat (usec): min=3942, max=47567, avg=12805.82, stdev=4025.98 00:09:30.846 lat (usec): min=6132, max=47606, avg=12900.58, stdev=4069.45 00:09:30.846 clat percentiles (usec): 00:09:30.846 | 1.00th=[ 6456], 5.00th=[ 8455], 10.00th=[ 9241], 20.00th=[10552], 00:09:30.846 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 00:09:30.846 | 70.00th=[12780], 80.00th=[14484], 90.00th=[20055], 95.00th=[21365], 00:09:30.846 | 99.00th=[25297], 99.50th=[30540], 99.90th=[31327], 99.95th=[31327], 00:09:30.846 | 
99.99th=[47449] 00:09:30.847 write: IOPS=4859, BW=19.0MiB/s (19.9MB/s)(19.1MiB/1004msec); 0 zone resets 00:09:30.847 slat (usec): min=3, max=20392, avg=98.59, stdev=779.39 00:09:30.847 clat (usec): min=2453, max=48790, avg=13992.21, stdev=6593.73 00:09:30.847 lat (usec): min=2461, max=48809, avg=14090.80, stdev=6651.19 00:09:30.847 clat percentiles (usec): 00:09:30.847 | 1.00th=[ 3785], 5.00th=[ 6652], 10.00th=[ 9372], 20.00th=[10552], 00:09:30.847 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11600], 60.00th=[11994], 00:09:30.847 | 70.00th=[13698], 80.00th=[18744], 90.00th=[22414], 95.00th=[29492], 00:09:30.847 | 99.00th=[40633], 99.50th=[41157], 99.90th=[43254], 99.95th=[43779], 00:09:30.847 | 99.99th=[49021] 00:09:30.847 bw ( KiB/s): min=18160, max=19856, per=28.65%, avg=19008.00, stdev=1199.25, samples=2 00:09:30.847 iops : min= 4540, max= 4964, avg=4752.00, stdev=299.81, samples=2 00:09:30.847 lat (msec) : 4=0.62%, 10=12.73%, 20=73.62%, 50=13.03% 00:09:30.847 cpu : usr=5.08%, sys=9.67%, ctx=413, majf=0, minf=1 00:09:30.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:30.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:30.847 issued rwts: total=4608,4879,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.847 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:30.847 job1: (groupid=0, jobs=1): err= 0: pid=3256188: Wed Nov 20 09:00:07 2024 00:09:30.847 read: IOPS=4201, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1003msec) 00:09:30.847 slat (usec): min=2, max=16391, avg=119.50, stdev=772.75 00:09:30.847 clat (usec): min=2105, max=43670, avg=14967.23, stdev=5632.50 00:09:30.847 lat (usec): min=5036, max=43710, avg=15086.73, stdev=5707.61 00:09:30.847 clat percentiles (usec): 00:09:30.847 | 1.00th=[ 5342], 5.00th=[ 9372], 10.00th=[10945], 20.00th=[11338], 00:09:30.847 | 30.00th=[11600], 40.00th=[11731], 50.00th=[12256], 
60.00th=[13960], 00:09:30.847 | 70.00th=[15401], 80.00th=[20317], 90.00th=[22152], 95.00th=[27657], 00:09:30.847 | 99.00th=[32375], 99.50th=[32375], 99.90th=[36963], 99.95th=[38011], 00:09:30.847 | 99.99th=[43779] 00:09:30.847 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:09:30.847 slat (usec): min=3, max=9301, avg=97.73, stdev=483.94 00:09:30.847 clat (usec): min=7046, max=39103, avg=13826.91, stdev=4078.94 00:09:30.847 lat (usec): min=7073, max=39139, avg=13924.65, stdev=4107.42 00:09:30.847 clat percentiles (usec): 00:09:30.847 | 1.00th=[ 8291], 5.00th=[10421], 10.00th=[10945], 20.00th=[11600], 00:09:30.847 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12256], 60.00th=[12387], 00:09:30.847 | 70.00th=[13698], 80.00th=[16581], 90.00th=[19268], 95.00th=[21627], 00:09:30.847 | 99.00th=[32113], 99.50th=[32900], 99.90th=[36963], 99.95th=[36963], 00:09:30.847 | 99.99th=[39060] 00:09:30.847 bw ( KiB/s): min=14648, max=22144, per=27.72%, avg=18396.00, stdev=5300.47, samples=2 00:09:30.847 iops : min= 3662, max= 5536, avg=4599.00, stdev=1325.12, samples=2 00:09:30.847 lat (msec) : 4=0.01%, 10=5.16%, 20=80.68%, 50=14.15% 00:09:30.847 cpu : usr=6.09%, sys=8.68%, ctx=520, majf=0, minf=1 00:09:30.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:30.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:30.847 issued rwts: total=4214,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.847 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:30.847 job2: (groupid=0, jobs=1): err= 0: pid=3256189: Wed Nov 20 09:00:07 2024 00:09:30.847 read: IOPS=3311, BW=12.9MiB/s (13.6MB/s)(13.0MiB/1004msec) 00:09:30.847 slat (usec): min=3, max=18867, avg=141.64, stdev=1074.04 00:09:30.847 clat (usec): min=2403, max=61375, avg=18795.54, stdev=7480.13 00:09:30.847 lat (usec): min=5220, max=75084, avg=18937.18, 
stdev=7591.64 00:09:30.847 clat percentiles (usec): 00:09:30.847 | 1.00th=[ 5276], 5.00th=[ 9896], 10.00th=[10683], 20.00th=[11994], 00:09:30.847 | 30.00th=[13829], 40.00th=[15401], 50.00th=[17171], 60.00th=[19006], 00:09:30.847 | 70.00th=[22414], 80.00th=[27395], 90.00th=[29230], 95.00th=[30540], 00:09:30.847 | 99.00th=[38011], 99.50th=[40633], 99.90th=[60031], 99.95th=[61604], 00:09:30.847 | 99.99th=[61604] 00:09:30.847 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:09:30.847 slat (usec): min=3, max=18184, avg=130.85, stdev=860.50 00:09:30.847 clat (usec): min=3704, max=62159, avg=18071.21, stdev=10926.50 00:09:30.847 lat (usec): min=3724, max=62187, avg=18202.06, stdev=11024.03 00:09:30.847 clat percentiles (usec): 00:09:30.847 | 1.00th=[ 4817], 5.00th=[ 9110], 10.00th=[10552], 20.00th=[11863], 00:09:30.847 | 30.00th=[13173], 40.00th=[14353], 50.00th=[14746], 60.00th=[15008], 00:09:30.847 | 70.00th=[18220], 80.00th=[22414], 90.00th=[26346], 95.00th=[49546], 00:09:30.847 | 99.00th=[60031], 99.50th=[62129], 99.90th=[62129], 99.95th=[62129], 00:09:30.847 | 99.99th=[62129] 00:09:30.847 bw ( KiB/s): min=12288, max=16384, per=21.61%, avg=14336.00, stdev=2896.31, samples=2 00:09:30.847 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:09:30.847 lat (msec) : 4=0.14%, 10=6.85%, 20=64.08%, 50=26.28%, 100=2.65% 00:09:30.847 cpu : usr=4.59%, sys=6.68%, ctx=247, majf=0, minf=1 00:09:30.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:30.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:30.847 issued rwts: total=3325,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.847 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:30.847 job3: (groupid=0, jobs=1): err= 0: pid=3256190: Wed Nov 20 09:00:07 2024 00:09:30.847 read: IOPS=3189, BW=12.5MiB/s 
(13.1MB/s)(12.5MiB/1003msec) 00:09:30.847 slat (usec): min=3, max=10119, avg=134.86, stdev=826.85 00:09:30.847 clat (usec): min=1250, max=39936, avg=17451.68, stdev=4765.42 00:09:30.847 lat (usec): min=3589, max=39958, avg=17586.54, stdev=4834.23 00:09:30.847 clat percentiles (usec): 00:09:30.847 | 1.00th=[ 6652], 5.00th=[11207], 10.00th=[12256], 20.00th=[13960], 00:09:30.847 | 30.00th=[15008], 40.00th=[15926], 50.00th=[16319], 60.00th=[17957], 00:09:30.847 | 70.00th=[19530], 80.00th=[21627], 90.00th=[23725], 95.00th=[25560], 00:09:30.847 | 99.00th=[32900], 99.50th=[33817], 99.90th=[40109], 99.95th=[40109], 00:09:30.847 | 99.99th=[40109] 00:09:30.847 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:09:30.847 slat (usec): min=3, max=13463, avg=138.10, stdev=841.36 00:09:30.847 clat (usec): min=1122, max=64843, avg=19925.49, stdev=15614.86 00:09:30.847 lat (usec): min=1158, max=64853, avg=20063.59, stdev=15719.13 00:09:30.847 clat percentiles (usec): 00:09:30.847 | 1.00th=[ 2671], 5.00th=[ 5604], 10.00th=[ 8160], 20.00th=[10814], 00:09:30.847 | 30.00th=[11863], 40.00th=[13304], 50.00th=[13698], 60.00th=[14746], 00:09:30.847 | 70.00th=[15139], 80.00th=[27132], 90.00th=[50594], 95.00th=[55313], 00:09:30.847 | 99.00th=[60031], 99.50th=[63177], 99.90th=[63701], 99.95th=[63701], 00:09:30.847 | 99.99th=[64750] 00:09:30.847 bw ( KiB/s): min=12288, max=16384, per=21.61%, avg=14336.00, stdev=2896.31, samples=2 00:09:30.847 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:09:30.847 lat (msec) : 2=0.40%, 4=0.72%, 10=8.40%, 20=65.33%, 50=19.68% 00:09:30.847 lat (msec) : 100=5.47% 00:09:30.847 cpu : usr=5.09%, sys=6.99%, ctx=249, majf=0, minf=1 00:09:30.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:30.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:30.847 issued rwts: 
total=3199,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.847 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:30.847 00:09:30.847 Run status group 0 (all jobs): 00:09:30.847 READ: bw=59.7MiB/s (62.6MB/s), 12.5MiB/s-17.9MiB/s (13.1MB/s-18.8MB/s), io=59.9MiB (62.9MB), run=1003-1004msec 00:09:30.847 WRITE: bw=64.8MiB/s (67.9MB/s), 13.9MiB/s-19.0MiB/s (14.6MB/s-19.9MB/s), io=65.1MiB (68.2MB), run=1003-1004msec 00:09:30.847 00:09:30.847 Disk stats (read/write): 00:09:30.847 nvme0n1: ios=4120/4183, merge=0/0, ticks=27452/34199, in_queue=61651, util=98.00% 00:09:30.847 nvme0n2: ios=3600/3629, merge=0/0, ticks=24620/20982, in_queue=45602, util=87.31% 00:09:30.847 nvme0n3: ios=3130/3135, merge=0/0, ticks=33634/31474, in_queue=65108, util=98.12% 00:09:30.847 nvme0n4: ios=2603/2854, merge=0/0, ticks=30795/40146, in_queue=70941, util=94.85% 00:09:30.847 09:00:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:30.847 [global] 00:09:30.847 thread=1 00:09:30.847 invalidate=1 00:09:30.847 rw=randwrite 00:09:30.847 time_based=1 00:09:30.847 runtime=1 00:09:30.847 ioengine=libaio 00:09:30.847 direct=1 00:09:30.847 bs=4096 00:09:30.847 iodepth=128 00:09:30.847 norandommap=0 00:09:30.847 numjobs=1 00:09:30.847 00:09:30.847 verify_dump=1 00:09:30.847 verify_backlog=512 00:09:30.847 verify_state_save=0 00:09:30.847 do_verify=1 00:09:30.847 verify=crc32c-intel 00:09:30.847 [job0] 00:09:30.847 filename=/dev/nvme0n1 00:09:30.847 [job1] 00:09:30.847 filename=/dev/nvme0n2 00:09:30.847 [job2] 00:09:30.847 filename=/dev/nvme0n3 00:09:30.847 [job3] 00:09:30.847 filename=/dev/nvme0n4 00:09:30.847 Could not set queue depth (nvme0n1) 00:09:30.847 Could not set queue depth (nvme0n2) 00:09:30.847 Could not set queue depth (nvme0n3) 00:09:30.847 Could not set queue depth (nvme0n4) 00:09:30.847 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:30.847 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:30.847 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:30.847 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:30.847 fio-3.35 00:09:30.847 Starting 4 threads 00:09:32.222 00:09:32.222 job0: (groupid=0, jobs=1): err= 0: pid=3256481: Wed Nov 20 09:00:08 2024 00:09:32.222 read: IOPS=3916, BW=15.3MiB/s (16.0MB/s)(15.3MiB/1003msec) 00:09:32.222 slat (usec): min=3, max=13869, avg=130.48, stdev=840.13 00:09:32.222 clat (usec): min=598, max=52319, avg=15995.65, stdev=8150.06 00:09:32.222 lat (usec): min=3638, max=52336, avg=16126.13, stdev=8231.92 00:09:32.222 clat percentiles (usec): 00:09:32.222 | 1.00th=[ 4178], 5.00th=[ 9503], 10.00th=[ 9634], 20.00th=[10945], 00:09:32.222 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12649], 60.00th=[13304], 00:09:32.222 | 70.00th=[15926], 80.00th=[20841], 90.00th=[30278], 95.00th=[35390], 00:09:32.222 | 99.00th=[39584], 99.50th=[42730], 99.90th=[45876], 99.95th=[51643], 00:09:32.222 | 99.99th=[52167] 00:09:32.222 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:09:32.222 slat (usec): min=3, max=13547, avg=111.27, stdev=639.52 00:09:32.222 clat (usec): min=7482, max=47474, avg=15526.56, stdev=6056.42 00:09:32.222 lat (usec): min=7489, max=47491, avg=15637.84, stdev=6109.36 00:09:32.222 clat percentiles (usec): 00:09:32.222 | 1.00th=[ 8586], 5.00th=[ 9634], 10.00th=[10552], 20.00th=[11076], 00:09:32.222 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12780], 60.00th=[15401], 00:09:32.222 | 70.00th=[19006], 80.00th=[20055], 90.00th=[21103], 95.00th=[30802], 00:09:32.222 | 99.00th=[34866], 99.50th=[34866], 99.90th=[39584], 99.95th=[39584], 00:09:32.222 | 99.99th=[47449] 00:09:32.222 bw ( KiB/s): 
min=16384, max=16384, per=26.80%, avg=16384.00, stdev= 0.00, samples=2 00:09:32.222 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:09:32.222 lat (usec) : 750=0.01% 00:09:32.222 lat (msec) : 4=0.37%, 10=10.13%, 20=69.20%, 50=20.25%, 100=0.02% 00:09:32.222 cpu : usr=4.19%, sys=6.79%, ctx=353, majf=0, minf=1 00:09:32.222 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:32.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.222 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:32.222 issued rwts: total=3928,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.222 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:32.222 job1: (groupid=0, jobs=1): err= 0: pid=3256495: Wed Nov 20 09:00:08 2024 00:09:32.222 read: IOPS=2707, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1005msec) 00:09:32.222 slat (usec): min=3, max=13750, avg=182.99, stdev=985.40 00:09:32.222 clat (usec): min=752, max=53837, avg=22550.00, stdev=10349.92 00:09:32.222 lat (usec): min=4529, max=53846, avg=22732.99, stdev=10402.31 00:09:32.222 clat percentiles (usec): 00:09:32.222 | 1.00th=[ 5473], 5.00th=[ 9896], 10.00th=[10683], 20.00th=[11994], 00:09:32.222 | 30.00th=[14877], 40.00th=[18220], 50.00th=[21627], 60.00th=[24511], 00:09:32.222 | 70.00th=[28181], 80.00th=[29754], 90.00th=[36439], 95.00th=[41157], 00:09:32.222 | 99.00th=[51643], 99.50th=[51643], 99.90th=[53740], 99.95th=[53740], 00:09:32.222 | 99.99th=[53740] 00:09:32.222 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:09:32.222 slat (usec): min=3, max=11428, avg=156.21, stdev=871.91 00:09:32.222 clat (usec): min=1632, max=53738, avg=21246.13, stdev=8834.61 00:09:32.222 lat (usec): min=1642, max=53748, avg=21402.34, stdev=8868.27 00:09:32.222 clat percentiles (usec): 00:09:32.222 | 1.00th=[ 2671], 5.00th=[11207], 10.00th=[11600], 20.00th=[15533], 00:09:32.222 | 30.00th=[16188], 40.00th=[17433], 
50.00th=[19006], 60.00th=[22152], 00:09:32.222 | 70.00th=[25297], 80.00th=[27132], 90.00th=[32375], 95.00th=[40633], 00:09:32.222 | 99.00th=[48497], 99.50th=[51643], 99.90th=[53740], 99.95th=[53740], 00:09:32.222 | 99.99th=[53740] 00:09:32.222 bw ( KiB/s): min=12240, max=12336, per=20.10%, avg=12288.00, stdev=67.88, samples=2 00:09:32.222 iops : min= 3060, max= 3084, avg=3072.00, stdev=16.97, samples=2 00:09:32.222 lat (usec) : 1000=0.02% 00:09:32.222 lat (msec) : 2=0.09%, 4=0.45%, 10=5.30%, 20=43.67%, 50=49.16% 00:09:32.222 lat (msec) : 100=1.31% 00:09:32.222 cpu : usr=2.29%, sys=3.88%, ctx=240, majf=0, minf=1 00:09:32.222 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:32.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.222 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:32.222 issued rwts: total=2721,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.222 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:32.222 job2: (groupid=0, jobs=1): err= 0: pid=3256523: Wed Nov 20 09:00:08 2024 00:09:32.222 read: IOPS=3392, BW=13.2MiB/s (13.9MB/s)(13.3MiB/1005msec) 00:09:32.222 slat (usec): min=3, max=10135, avg=137.30, stdev=747.75 00:09:32.222 clat (usec): min=1860, max=40523, avg=17283.48, stdev=4945.65 00:09:32.222 lat (usec): min=5399, max=40538, avg=17420.78, stdev=5008.07 00:09:32.222 clat percentiles (usec): 00:09:32.222 | 1.00th=[ 8848], 5.00th=[11600], 10.00th=[12780], 20.00th=[13304], 00:09:32.222 | 30.00th=[14091], 40.00th=[14615], 50.00th=[16450], 60.00th=[17433], 00:09:32.222 | 70.00th=[18744], 80.00th=[20841], 90.00th=[23200], 95.00th=[27395], 00:09:32.222 | 99.00th=[32375], 99.50th=[33817], 99.90th=[37487], 99.95th=[38011], 00:09:32.222 | 99.99th=[40633] 00:09:32.222 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:09:32.222 slat (usec): min=4, max=9759, avg=140.29, stdev=652.51 00:09:32.222 clat (usec): min=8997, 
max=35241, avg=18898.43, stdev=6708.28 00:09:32.222 lat (usec): min=9014, max=35260, avg=19038.72, stdev=6758.34 00:09:32.222 clat percentiles (usec): 00:09:32.222 | 1.00th=[ 9765], 5.00th=[12256], 10.00th=[12387], 20.00th=[12911], 00:09:32.222 | 30.00th=[13435], 40.00th=[15139], 50.00th=[17171], 60.00th=[19530], 00:09:32.222 | 70.00th=[21103], 80.00th=[24249], 90.00th=[30278], 95.00th=[32375], 00:09:32.222 | 99.00th=[34866], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:09:32.222 | 99.99th=[35390] 00:09:32.222 bw ( KiB/s): min=12288, max=16384, per=23.45%, avg=14336.00, stdev=2896.31, samples=2 00:09:32.222 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:09:32.222 lat (msec) : 2=0.01%, 10=1.40%, 20=65.95%, 50=32.63% 00:09:32.222 cpu : usr=3.49%, sys=6.77%, ctx=388, majf=0, minf=1 00:09:32.222 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:32.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.222 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:32.222 issued rwts: total=3409,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.222 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:32.222 job3: (groupid=0, jobs=1): err= 0: pid=3256532: Wed Nov 20 09:00:08 2024 00:09:32.222 read: IOPS=4116, BW=16.1MiB/s (16.9MB/s)(16.1MiB/1003msec) 00:09:32.222 slat (usec): min=2, max=11242, avg=117.48, stdev=697.38 00:09:32.222 clat (usec): min=782, max=33000, avg=15015.38, stdev=2866.66 00:09:32.222 lat (usec): min=2914, max=33012, avg=15132.86, stdev=2887.38 00:09:32.222 clat percentiles (usec): 00:09:32.222 | 1.00th=[10421], 5.00th=[11731], 10.00th=[12518], 20.00th=[13173], 00:09:32.222 | 30.00th=[13435], 40.00th=[13960], 50.00th=[14353], 60.00th=[14615], 00:09:32.222 | 70.00th=[15533], 80.00th=[16909], 90.00th=[19530], 95.00th=[21627], 00:09:32.222 | 99.00th=[23200], 99.50th=[23200], 99.90th=[27395], 99.95th=[30278], 00:09:32.222 | 
99.99th=[32900] 00:09:32.222 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:09:32.222 slat (usec): min=3, max=7543, avg=104.52, stdev=520.81 00:09:32.222 clat (usec): min=6111, max=35878, avg=14031.92, stdev=3098.31 00:09:32.222 lat (usec): min=6119, max=35882, avg=14136.44, stdev=3100.22 00:09:32.222 clat percentiles (usec): 00:09:32.222 | 1.00th=[ 6849], 5.00th=[10159], 10.00th=[11338], 20.00th=[12518], 00:09:32.222 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13173], 60.00th=[13566], 00:09:32.222 | 70.00th=[14222], 80.00th=[15401], 90.00th=[19530], 95.00th=[20579], 00:09:32.222 | 99.00th=[22676], 99.50th=[28967], 99.90th=[35914], 99.95th=[35914], 00:09:32.222 | 99.99th=[35914] 00:09:32.222 bw ( KiB/s): min=16032, max=20072, per=29.53%, avg=18052.00, stdev=2856.71, samples=2 00:09:32.222 iops : min= 4008, max= 5018, avg=4513.00, stdev=714.18, samples=2 00:09:32.222 lat (usec) : 1000=0.01% 00:09:32.222 lat (msec) : 4=0.26%, 10=1.41%, 20=88.76%, 50=9.56% 00:09:32.222 cpu : usr=4.09%, sys=7.19%, ctx=386, majf=0, minf=2 00:09:32.222 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:32.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.222 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:32.222 issued rwts: total=4129,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.222 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:32.222 00:09:32.222 Run status group 0 (all jobs): 00:09:32.222 READ: bw=55.1MiB/s (57.8MB/s), 10.6MiB/s-16.1MiB/s (11.1MB/s-16.9MB/s), io=55.4MiB (58.1MB), run=1003-1005msec 00:09:32.222 WRITE: bw=59.7MiB/s (62.6MB/s), 11.9MiB/s-17.9MiB/s (12.5MB/s-18.8MB/s), io=60.0MiB (62.9MB), run=1003-1005msec 00:09:32.222 00:09:32.222 Disk stats (read/write): 00:09:32.222 nvme0n1: ios=3122/3232, merge=0/0, ticks=18386/16081, in_queue=34467, util=85.47% 00:09:32.222 nvme0n2: ios=2560/2560, merge=0/0, ticks=24345/26322, 
in_queue=50667, util=89.43% 00:09:32.222 nvme0n3: ios=3058/3072, merge=0/0, ticks=16876/18189, in_queue=35065, util=93.42% 00:09:32.222 nvme0n4: ios=3637/3609, merge=0/0, ticks=19806/15899, in_queue=35705, util=94.53% 00:09:32.222 09:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:32.222 09:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3256933 00:09:32.223 09:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:32.223 09:00:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:32.223 [global] 00:09:32.223 thread=1 00:09:32.223 invalidate=1 00:09:32.223 rw=read 00:09:32.223 time_based=1 00:09:32.223 runtime=10 00:09:32.223 ioengine=libaio 00:09:32.223 direct=1 00:09:32.223 bs=4096 00:09:32.223 iodepth=1 00:09:32.223 norandommap=1 00:09:32.223 numjobs=1 00:09:32.223 00:09:32.223 [job0] 00:09:32.223 filename=/dev/nvme0n1 00:09:32.223 [job1] 00:09:32.223 filename=/dev/nvme0n2 00:09:32.223 [job2] 00:09:32.223 filename=/dev/nvme0n3 00:09:32.223 [job3] 00:09:32.223 filename=/dev/nvme0n4 00:09:32.223 Could not set queue depth (nvme0n1) 00:09:32.223 Could not set queue depth (nvme0n2) 00:09:32.223 Could not set queue depth (nvme0n3) 00:09:32.223 Could not set queue depth (nvme0n4) 00:09:32.223 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:32.223 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:32.223 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:32.223 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:32.223 fio-3.35 00:09:32.223 Starting 4 threads 00:09:35.502 09:00:11 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:35.502 09:00:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:35.502 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=11800576, buflen=4096 00:09:35.502 fio: pid=3257189, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:35.760 09:00:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:35.760 09:00:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:35.760 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=45387776, buflen=4096 00:09:35.760 fio: pid=3257188, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:36.018 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=47861760, buflen=4096 00:09:36.018 fio: pid=3257186, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:36.018 09:00:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:36.018 09:00:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:36.276 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=43413504, buflen=4096 00:09:36.276 fio: pid=3257187, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:36.276 09:00:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:09:36.276 09:00:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:36.276 00:09:36.276 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3257186: Wed Nov 20 09:00:13 2024 00:09:36.276 read: IOPS=3303, BW=12.9MiB/s (13.5MB/s)(45.6MiB/3537msec) 00:09:36.276 slat (usec): min=5, max=11860, avg=10.91, stdev=199.95 00:09:36.276 clat (usec): min=171, max=41258, avg=287.87, stdev=1129.91 00:09:36.276 lat (usec): min=177, max=41265, avg=298.78, stdev=1147.47 00:09:36.276 clat percentiles (usec): 00:09:36.276 | 1.00th=[ 182], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 206], 00:09:36.276 | 30.00th=[ 219], 40.00th=[ 239], 50.00th=[ 260], 60.00th=[ 273], 00:09:36.276 | 70.00th=[ 281], 80.00th=[ 285], 90.00th=[ 302], 95.00th=[ 351], 00:09:36.276 | 99.00th=[ 453], 99.50th=[ 498], 99.90th=[ 1254], 99.95th=[40633], 00:09:36.276 | 99.99th=[41157] 00:09:36.276 bw ( KiB/s): min=10352, max=17112, per=35.57%, avg=13473.33, stdev=2713.92, samples=6 00:09:36.276 iops : min= 2588, max= 4278, avg=3368.33, stdev=678.48, samples=6 00:09:36.276 lat (usec) : 250=45.20%, 500=54.31%, 750=0.33%, 1000=0.03% 00:09:36.276 lat (msec) : 2=0.03%, 10=0.01%, 50=0.08% 00:09:36.276 cpu : usr=1.07%, sys=4.27%, ctx=11691, majf=0, minf=1 00:09:36.276 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:36.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.276 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.276 issued rwts: total=11686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.276 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:36.276 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3257187: Wed Nov 20 09:00:13 2024 00:09:36.276 
read: IOPS=2769, BW=10.8MiB/s (11.3MB/s)(41.4MiB/3828msec) 00:09:36.276 slat (usec): min=4, max=28873, avg=16.35, stdev=326.01 00:09:36.276 clat (usec): min=174, max=41472, avg=339.57, stdev=1680.77 00:09:36.276 lat (usec): min=180, max=41492, avg=355.92, stdev=1712.46 00:09:36.276 clat percentiles (usec): 00:09:36.276 | 1.00th=[ 184], 5.00th=[ 196], 10.00th=[ 210], 20.00th=[ 231], 00:09:36.276 | 30.00th=[ 245], 40.00th=[ 253], 50.00th=[ 262], 60.00th=[ 269], 00:09:36.276 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 326], 95.00th=[ 388], 00:09:36.276 | 99.00th=[ 545], 99.50th=[ 619], 99.90th=[41157], 99.95th=[41157], 00:09:36.276 | 99.99th=[41157] 00:09:36.276 bw ( KiB/s): min= 704, max=13768, per=28.18%, avg=10674.86, stdev=4853.41, samples=7 00:09:36.276 iops : min= 176, max= 3442, avg=2668.71, stdev=1213.35, samples=7 00:09:36.276 lat (usec) : 250=35.05%, 500=63.21%, 750=1.54%, 1000=0.02% 00:09:36.276 lat (msec) : 20=0.01%, 50=0.17% 00:09:36.276 cpu : usr=1.70%, sys=4.21%, ctx=10609, majf=0, minf=2 00:09:36.276 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:36.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.276 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.276 issued rwts: total=10600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.276 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:36.276 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3257188: Wed Nov 20 09:00:13 2024 00:09:36.276 read: IOPS=3407, BW=13.3MiB/s (14.0MB/s)(43.3MiB/3252msec) 00:09:36.276 slat (usec): min=4, max=13731, avg=12.63, stdev=171.58 00:09:36.276 clat (usec): min=175, max=41132, avg=276.27, stdev=1277.02 00:09:36.276 lat (usec): min=181, max=41143, avg=288.90, stdev=1288.47 00:09:36.276 clat percentiles (usec): 00:09:36.276 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 202], 00:09:36.276 | 30.00th=[ 
206], 40.00th=[ 210], 50.00th=[ 215], 60.00th=[ 219], 00:09:36.276 | 70.00th=[ 229], 80.00th=[ 262], 90.00th=[ 326], 95.00th=[ 359], 00:09:36.276 | 99.00th=[ 441], 99.50th=[ 474], 99.90th=[ 1172], 99.95th=[40633], 00:09:36.276 | 99.99th=[41157] 00:09:36.276 bw ( KiB/s): min= 6392, max=17752, per=36.84%, avg=13953.33, stdev=4461.15, samples=6 00:09:36.276 iops : min= 1598, max= 4438, avg=3488.33, stdev=1115.29, samples=6 00:09:36.276 lat (usec) : 250=78.25%, 500=21.37%, 750=0.25%, 1000=0.01% 00:09:36.276 lat (msec) : 2=0.01%, 50=0.10% 00:09:36.276 cpu : usr=1.75%, sys=4.12%, ctx=11084, majf=0, minf=2 00:09:36.277 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:36.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.277 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.277 issued rwts: total=11082,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.277 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:36.277 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3257189: Wed Nov 20 09:00:13 2024 00:09:36.277 read: IOPS=966, BW=3866KiB/s (3959kB/s)(11.3MiB/2981msec) 00:09:36.277 slat (nsec): min=5462, max=53936, avg=9477.03, stdev=5630.10 00:09:36.277 clat (usec): min=188, max=43469, avg=1014.58, stdev=5406.90 00:09:36.277 lat (usec): min=194, max=43490, avg=1024.05, stdev=5407.98 00:09:36.277 clat percentiles (usec): 00:09:36.277 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 217], 00:09:36.277 | 30.00th=[ 227], 40.00th=[ 243], 50.00th=[ 281], 60.00th=[ 293], 00:09:36.277 | 70.00th=[ 306], 80.00th=[ 318], 90.00th=[ 351], 95.00th=[ 420], 00:09:36.277 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:09:36.277 | 99.99th=[43254] 00:09:36.277 bw ( KiB/s): min= 408, max=14112, per=11.70%, avg=4430.40, stdev=5960.00, samples=5 00:09:36.277 iops : min= 102, max= 3528, avg=1107.60, 
stdev=1490.00, samples=5 00:09:36.277 lat (usec) : 250=42.78%, 500=53.68%, 750=1.49%, 1000=0.07% 00:09:36.277 lat (msec) : 4=0.07%, 10=0.07%, 50=1.80% 00:09:36.277 cpu : usr=0.47%, sys=1.54%, ctx=2882, majf=0, minf=1 00:09:36.277 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:36.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.277 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.277 issued rwts: total=2882,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.277 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:36.277 00:09:36.277 Run status group 0 (all jobs): 00:09:36.277 READ: bw=37.0MiB/s (38.8MB/s), 3866KiB/s-13.3MiB/s (3959kB/s-14.0MB/s), io=142MiB (148MB), run=2981-3828msec 00:09:36.277 00:09:36.277 Disk stats (read/write): 00:09:36.277 nvme0n1: ios=11229/0, merge=0/0, ticks=3504/0, in_queue=3504, util=99.23% 00:09:36.277 nvme0n2: ios=9705/0, merge=0/0, ticks=3983/0, in_queue=3983, util=98.15% 00:09:36.277 nvme0n3: ios=10737/0, merge=0/0, ticks=2849/0, in_queue=2849, util=96.01% 00:09:36.277 nvme0n4: ios=2877/0, merge=0/0, ticks=2764/0, in_queue=2764, util=96.75% 00:09:36.534 09:00:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:36.534 09:00:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:36.792 09:00:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:36.792 09:00:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:37.050 09:00:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:37.050 09:00:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:37.308 09:00:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:37.308 09:00:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:37.566 09:00:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:37.566 09:00:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3256933 00:09:37.567 09:00:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:37.567 09:00:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:37.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.825 09:00:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:37.825 09:00:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:09:37.825 09:00:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:37.825 09:00:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:37.825 09:00:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:37.825 09:00:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:37.825 09:00:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:09:37.825 09:00:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:37.825 09:00:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:37.825 nvmf hotplug test: fio failed as expected 00:09:37.825 09:00:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:38.082 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:38.082 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:38.082 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:38.082 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:38.082 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:38.082 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:38.082 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:38.083 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:38.083 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:38.083 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:38.083 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:38.083 rmmod nvme_tcp 00:09:38.083 rmmod nvme_fabrics 00:09:38.083 rmmod nvme_keyring 00:09:38.083 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:38.083 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # 
set -e 00:09:38.083 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:38.083 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3254408 ']' 00:09:38.083 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3254408 00:09:38.083 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 3254408 ']' 00:09:38.083 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 3254408 00:09:38.083 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:09:38.083 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:38.083 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3254408 00:09:38.083 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:38.083 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:38.083 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3254408' 00:09:38.083 killing process with pid 3254408 00:09:38.083 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 3254408 00:09:38.083 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 3254408 00:09:38.342 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:38.342 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:38.342 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:38.342 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@297 -- # iptr 00:09:38.342 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:38.342 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:38.342 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:38.342 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:38.342 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:38.342 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.342 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.342 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:40.881 00:09:40.881 real 0m24.412s 00:09:40.881 user 1m25.611s 00:09:40.881 sys 0m7.588s 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:40.881 ************************************ 00:09:40.881 END TEST nvmf_fio_target 00:09:40.881 ************************************ 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:40.881 09:00:17 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:40.881 ************************************ 00:09:40.881 START TEST nvmf_bdevio 00:09:40.881 ************************************ 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:40.881 * Looking for test storage... 00:09:40.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@341 -- # ver2_l=1 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:40.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.881 --rc genhtml_branch_coverage=1 00:09:40.881 --rc genhtml_function_coverage=1 00:09:40.881 --rc genhtml_legend=1 00:09:40.881 --rc geninfo_all_blocks=1 00:09:40.881 --rc geninfo_unexecuted_blocks=1 00:09:40.881 00:09:40.881 ' 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:40.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.881 --rc genhtml_branch_coverage=1 00:09:40.881 --rc genhtml_function_coverage=1 00:09:40.881 --rc genhtml_legend=1 00:09:40.881 --rc geninfo_all_blocks=1 00:09:40.881 --rc geninfo_unexecuted_blocks=1 00:09:40.881 00:09:40.881 ' 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:40.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.881 --rc genhtml_branch_coverage=1 00:09:40.881 --rc genhtml_function_coverage=1 00:09:40.881 --rc genhtml_legend=1 00:09:40.881 --rc geninfo_all_blocks=1 00:09:40.881 --rc geninfo_unexecuted_blocks=1 00:09:40.881 00:09:40.881 ' 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:40.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.881 --rc genhtml_branch_coverage=1 00:09:40.881 --rc genhtml_function_coverage=1 00:09:40.881 --rc genhtml_legend=1 00:09:40.881 --rc geninfo_all_blocks=1 00:09:40.881 --rc geninfo_unexecuted_blocks=1 00:09:40.881 00:09:40.881 ' 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:40.881 09:00:17 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.881 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.882 09:00:17 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.882 09:00:17 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:40.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:40.882 
09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:40.882 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:42.784 09:00:19 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:42.784 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:42.784 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:42.784 
09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:42.784 Found net devices under 0000:09:00.0: cvl_0_0 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:42.784 Found net devices under 0000:09:00.1: cvl_0_1 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:42.784 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:43.042 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:43.042 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:43.042 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:43.042 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:43.042 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:43.042 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:09:43.042 00:09:43.042 --- 10.0.0.2 ping statistics --- 00:09:43.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.042 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:09:43.042 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:43.042 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:43.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:09:43.042 00:09:43.042 --- 10.0.0.1 ping statistics --- 00:09:43.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.042 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:09:43.042 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:43.042 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:43.042 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:43.042 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:43.042 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:43.042 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:43.042 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:43.042 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:43.042 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:43.042 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:43.042 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:43.042 09:00:19 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:43.042 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.042 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3259924 00:09:43.042 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:43.042 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3259924 00:09:43.043 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 3259924 ']' 00:09:43.043 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.043 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:43.043 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.043 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:43.043 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.043 [2024-11-20 09:00:19.927916] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:09:43.043 [2024-11-20 09:00:19.928013] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.043 [2024-11-20 09:00:19.997719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:43.043 [2024-11-20 09:00:20.061184] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:43.043 [2024-11-20 09:00:20.061236] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:43.043 [2024-11-20 09:00:20.061263] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:43.043 [2024-11-20 09:00:20.061274] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:43.043 [2024-11-20 09:00:20.061296] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:43.043 [2024-11-20 09:00:20.063059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:43.043 [2024-11-20 09:00:20.063121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:43.043 [2024-11-20 09:00:20.063228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:43.043 [2024-11-20 09:00:20.063237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:43.301 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:43.301 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:09:43.301 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:43.301 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:43.301 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.301 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:43.301 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:43.301 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.301 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.301 [2024-11-20 09:00:20.205146] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:43.301 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.301 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:43.301 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.301 09:00:20 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.301 Malloc0 00:09:43.301 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.301 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:43.301 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.301 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.301 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.301 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:43.301 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.301 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.301 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.301 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:43.301 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.301 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.301 [2024-11-20 09:00:20.264133] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:43.301 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.301 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:09:43.301 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:43.301 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:43.301 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:43.301 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:43.301 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:43.301 { 00:09:43.301 "params": { 00:09:43.301 "name": "Nvme$subsystem", 00:09:43.301 "trtype": "$TEST_TRANSPORT", 00:09:43.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:43.301 "adrfam": "ipv4", 00:09:43.301 "trsvcid": "$NVMF_PORT", 00:09:43.301 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:43.301 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:43.301 "hdgst": ${hdgst:-false}, 00:09:43.301 "ddgst": ${ddgst:-false} 00:09:43.301 }, 00:09:43.301 "method": "bdev_nvme_attach_controller" 00:09:43.301 } 00:09:43.301 EOF 00:09:43.301 )") 00:09:43.301 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:43.301 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:09:43.301 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:43.301 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:43.301 "params": { 00:09:43.301 "name": "Nvme1", 00:09:43.301 "trtype": "tcp", 00:09:43.301 "traddr": "10.0.0.2", 00:09:43.301 "adrfam": "ipv4", 00:09:43.301 "trsvcid": "4420", 00:09:43.301 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:43.301 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:43.301 "hdgst": false, 00:09:43.301 "ddgst": false 00:09:43.301 }, 00:09:43.301 "method": "bdev_nvme_attach_controller" 00:09:43.301 }' 00:09:43.301 [2024-11-20 09:00:20.313166] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:09:43.301 [2024-11-20 09:00:20.313240] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3259952 ] 00:09:43.559 [2024-11-20 09:00:20.383315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:43.559 [2024-11-20 09:00:20.450336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.559 [2024-11-20 09:00:20.450363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:43.559 [2024-11-20 09:00:20.450367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.817 I/O targets: 00:09:43.817 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:43.817 00:09:43.817 00:09:43.817 CUnit - A unit testing framework for C - Version 2.1-3 00:09:43.817 http://cunit.sourceforge.net/ 00:09:43.817 00:09:43.817 00:09:43.817 Suite: bdevio tests on: Nvme1n1 00:09:43.817 Test: blockdev write read block ...passed 00:09:44.075 Test: blockdev write zeroes read block ...passed 00:09:44.075 Test: blockdev write zeroes read no split ...passed 00:09:44.075 Test: blockdev write zeroes read split 
...passed 00:09:44.075 Test: blockdev write zeroes read split partial ...passed 00:09:44.075 Test: blockdev reset ...[2024-11-20 09:00:20.910241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:44.075 [2024-11-20 09:00:20.910366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2190640 (9): Bad file descriptor 00:09:44.075 [2024-11-20 09:00:20.926202] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:09:44.075 passed 00:09:44.075 Test: blockdev write read 8 blocks ...passed 00:09:44.075 Test: blockdev write read size > 128k ...passed 00:09:44.075 Test: blockdev write read invalid size ...passed 00:09:44.075 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:44.075 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:44.075 Test: blockdev write read max offset ...passed 00:09:44.075 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:44.075 Test: blockdev writev readv 8 blocks ...passed 00:09:44.075 Test: blockdev writev readv 30 x 1block ...passed 00:09:44.333 Test: blockdev writev readv block ...passed 00:09:44.333 Test: blockdev writev readv size > 128k ...passed 00:09:44.333 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:44.333 Test: blockdev comparev and writev ...[2024-11-20 09:00:21.140664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.333 [2024-11-20 09:00:21.140702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:44.333 [2024-11-20 09:00:21.140728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.333 [2024-11-20 
09:00:21.140745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:44.333 [2024-11-20 09:00:21.141080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.333 [2024-11-20 09:00:21.141105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:44.333 [2024-11-20 09:00:21.141127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.333 [2024-11-20 09:00:21.141144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:44.333 [2024-11-20 09:00:21.141469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.333 [2024-11-20 09:00:21.141494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:44.333 [2024-11-20 09:00:21.141515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.333 [2024-11-20 09:00:21.141532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:44.334 [2024-11-20 09:00:21.141853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.334 [2024-11-20 09:00:21.141877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:44.334 [2024-11-20 09:00:21.141898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.334 [2024-11-20 09:00:21.141914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:44.334 passed 00:09:44.334 Test: blockdev nvme passthru rw ...passed 00:09:44.334 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:00:21.224559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:44.334 [2024-11-20 09:00:21.224588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:44.334 [2024-11-20 09:00:21.224753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:44.334 [2024-11-20 09:00:21.224777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:44.334 [2024-11-20 09:00:21.224935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:44.334 [2024-11-20 09:00:21.224959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:44.334 [2024-11-20 09:00:21.225116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:44.334 [2024-11-20 09:00:21.225141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:44.334 passed 00:09:44.334 Test: blockdev nvme admin passthru ...passed 00:09:44.334 Test: blockdev copy ...passed 00:09:44.334 00:09:44.334 Run Summary: Type Total Ran Passed Failed Inactive 00:09:44.334 suites 1 1 n/a 0 0 00:09:44.334 tests 23 23 23 0 0 00:09:44.334 asserts 152 152 152 0 n/a 00:09:44.334 00:09:44.334 Elapsed time = 1.049 seconds 
00:09:44.592 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.592 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.592 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:44.592 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.592 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:44.592 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:44.592 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:44.592 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:44.592 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:44.592 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:44.592 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:44.592 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:44.592 rmmod nvme_tcp 00:09:44.592 rmmod nvme_fabrics 00:09:44.592 rmmod nvme_keyring 00:09:44.592 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:44.592 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:44.592 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:44.592 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3259924 ']' 00:09:44.592 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3259924 00:09:44.592 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 
-- # '[' -z 3259924 ']' 00:09:44.592 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 3259924 00:09:44.592 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:09:44.592 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:44.592 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3259924 00:09:44.592 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:09:44.592 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:09:44.592 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3259924' 00:09:44.592 killing process with pid 3259924 00:09:44.592 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 3259924 00:09:44.592 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 3259924 00:09:44.852 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:44.852 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:44.852 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:44.852 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:44.852 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:44.852 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:44.852 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:44.852 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:09:44.852 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:44.852 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.852 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.852 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.388 09:00:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:47.388 00:09:47.388 real 0m6.438s 00:09:47.388 user 0m10.119s 00:09:47.388 sys 0m2.167s 00:09:47.388 09:00:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:47.388 09:00:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:47.388 ************************************ 00:09:47.388 END TEST nvmf_bdevio 00:09:47.388 ************************************ 00:09:47.388 09:00:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:47.388 00:09:47.388 real 3m56.712s 00:09:47.388 user 10m17.351s 00:09:47.388 sys 1m8.358s 00:09:47.388 09:00:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:47.389 09:00:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:47.389 ************************************ 00:09:47.389 END TEST nvmf_target_core 00:09:47.389 ************************************ 00:09:47.389 09:00:23 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:47.389 09:00:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:47.389 09:00:23 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:47.389 09:00:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:09:47.389 ************************************ 00:09:47.389 START TEST nvmf_target_extra 00:09:47.389 ************************************ 00:09:47.389 09:00:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:47.389 * Looking for test storage... 00:09:47.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:47.389 09:00:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:47.389 09:00:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:09:47.389 09:00:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:47.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.389 --rc genhtml_branch_coverage=1 00:09:47.389 --rc genhtml_function_coverage=1 00:09:47.389 --rc genhtml_legend=1 00:09:47.389 --rc geninfo_all_blocks=1 
00:09:47.389 --rc geninfo_unexecuted_blocks=1 00:09:47.389 00:09:47.389 ' 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:47.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.389 --rc genhtml_branch_coverage=1 00:09:47.389 --rc genhtml_function_coverage=1 00:09:47.389 --rc genhtml_legend=1 00:09:47.389 --rc geninfo_all_blocks=1 00:09:47.389 --rc geninfo_unexecuted_blocks=1 00:09:47.389 00:09:47.389 ' 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:47.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.389 --rc genhtml_branch_coverage=1 00:09:47.389 --rc genhtml_function_coverage=1 00:09:47.389 --rc genhtml_legend=1 00:09:47.389 --rc geninfo_all_blocks=1 00:09:47.389 --rc geninfo_unexecuted_blocks=1 00:09:47.389 00:09:47.389 ' 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:47.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.389 --rc genhtml_branch_coverage=1 00:09:47.389 --rc genhtml_function_coverage=1 00:09:47.389 --rc genhtml_legend=1 00:09:47.389 --rc geninfo_all_blocks=1 00:09:47.389 --rc geninfo_unexecuted_blocks=1 00:09:47.389 00:09:47.389 ' 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:47.389 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:47.389 09:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:47.390 ************************************ 00:09:47.390 START TEST nvmf_example 00:09:47.390 ************************************ 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:47.390 * Looking for test storage... 00:09:47.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.390 
09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:47.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.390 --rc genhtml_branch_coverage=1 00:09:47.390 --rc genhtml_function_coverage=1 00:09:47.390 --rc genhtml_legend=1 00:09:47.390 --rc geninfo_all_blocks=1 00:09:47.390 --rc geninfo_unexecuted_blocks=1 00:09:47.390 00:09:47.390 ' 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:47.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.390 --rc genhtml_branch_coverage=1 00:09:47.390 --rc genhtml_function_coverage=1 00:09:47.390 --rc genhtml_legend=1 00:09:47.390 --rc geninfo_all_blocks=1 00:09:47.390 --rc geninfo_unexecuted_blocks=1 00:09:47.390 00:09:47.390 ' 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:47.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.390 --rc genhtml_branch_coverage=1 00:09:47.390 --rc genhtml_function_coverage=1 00:09:47.390 --rc genhtml_legend=1 00:09:47.390 --rc geninfo_all_blocks=1 00:09:47.390 --rc geninfo_unexecuted_blocks=1 00:09:47.390 00:09:47.390 ' 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:47.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.390 --rc 
genhtml_branch_coverage=1 00:09:47.390 --rc genhtml_function_coverage=1 00:09:47.390 --rc genhtml_legend=1 00:09:47.390 --rc geninfo_all_blocks=1 00:09:47.390 --rc geninfo_unexecuted_blocks=1 00:09:47.390 00:09:47.390 ' 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.390 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:47.391 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:47.391 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:47.391 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:47.391 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:47.391 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:47.391 09:00:24 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:47.391 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:47.391 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:47.391 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:47.391 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:47.391 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:47.391 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:47.391 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:47.391 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:47.391 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:47.391 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:47.391 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.391 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:47.391 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:47.391 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:47.391 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.391 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.391 
09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.391 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:47.391 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:47.391 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:47.391 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:49.920 09:00:26 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:49.920 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:49.920 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:49.920 Found net devices under 0000:09:00.0: cvl_0_0 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:49.920 09:00:26 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.920 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:49.921 Found net devices under 0000:09:00.1: cvl_0_1 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:49.921 
09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:49.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:49.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:09:49.921 00:09:49.921 --- 10.0.0.2 ping statistics --- 00:09:49.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.921 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:49.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:49.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:09:49.921 00:09:49.921 --- 10.0.0.1 ping statistics --- 00:09:49.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.921 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:49.921 09:00:26 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3262207 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3262207 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 3262207 ']' 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:09:49.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:49.921 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:50.856 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:50.856 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:09:50.856 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:50.856 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:50.856 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:50.856 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:50.856 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.856 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:50.856 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.856 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:50.856 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.856 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:50.856 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.856 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:50.856 
09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:50.856 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.856 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:50.856 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.856 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:50.856 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:50.856 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.856 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:50.856 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.856 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:50.856 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.856 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:50.856 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.856 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:50.856 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:03.054 Initializing NVMe Controllers 00:10:03.054 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:03.054 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:03.054 Initialization complete. Launching workers. 00:10:03.054 ======================================================== 00:10:03.054 Latency(us) 00:10:03.054 Device Information : IOPS MiB/s Average min max 00:10:03.054 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14904.30 58.22 4294.08 933.83 17311.07 00:10:03.054 ======================================================== 00:10:03.054 Total : 14904.30 58.22 4294.08 933.83 17311.07 00:10:03.054 00:10:03.054 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:03.054 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:03.054 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:03.054 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:03.054 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:03.054 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:03.055 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:03.055 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:03.055 rmmod nvme_tcp 00:10:03.055 rmmod nvme_fabrics 00:10:03.055 rmmod nvme_keyring 00:10:03.055 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:03.055 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:10:03.055 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:03.055 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3262207 ']' 00:10:03.055 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3262207 00:10:03.055 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 3262207 ']' 00:10:03.055 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 3262207 00:10:03.055 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:10:03.055 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:03.055 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3262207 00:10:03.055 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf 00:10:03.055 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:10:03.055 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3262207' 00:10:03.055 killing process with pid 3262207 00:10:03.055 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 3262207 00:10:03.055 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 3262207 00:10:03.055 nvmf threads initialize successfully 00:10:03.055 bdev subsystem init successfully 00:10:03.055 created a nvmf target service 00:10:03.055 create targets's poll groups done 00:10:03.055 all subsystems of target started 00:10:03.055 nvmf target is running 00:10:03.055 all subsystems of target stopped 00:10:03.055 destroy targets's poll groups done 00:10:03.055 destroyed the nvmf target service 00:10:03.055 bdev subsystem 
finish successfully 00:10:03.055 nvmf threads destroy successfully 00:10:03.055 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:03.055 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:03.055 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:03.055 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:03.055 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:03.055 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:03.055 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:03.055 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:03.055 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:03.055 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.055 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.055 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.623 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:03.623 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:03.623 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:03.623 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:03.623 00:10:03.623 real 0m16.432s 00:10:03.623 user 0m46.022s 00:10:03.623 sys 0m3.559s 00:10:03.623 
09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:03.623 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:03.623 ************************************ 00:10:03.623 END TEST nvmf_example 00:10:03.623 ************************************ 00:10:03.623 09:00:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:03.623 09:00:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:03.623 09:00:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:03.623 09:00:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:03.623 ************************************ 00:10:03.623 START TEST nvmf_filesystem 00:10:03.623 ************************************ 00:10:03.623 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:03.623 * Looking for test storage... 
00:10:03.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:03.623 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:03.623 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:03.623 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:03.886 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:03.886 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:03.886 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:03.886 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:03.886 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:03.886 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:03.886 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:03.886 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:03.886 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:03.886 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:03.886 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:03.886 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:03.886 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:03.886 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:03.887 
09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:03.887 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:03.887 --rc genhtml_branch_coverage=1 00:10:03.887 --rc genhtml_function_coverage=1 00:10:03.887 --rc genhtml_legend=1 00:10:03.887 --rc geninfo_all_blocks=1 00:10:03.887 --rc geninfo_unexecuted_blocks=1 00:10:03.887 00:10:03.887 ' 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:03.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.887 --rc genhtml_branch_coverage=1 00:10:03.887 --rc genhtml_function_coverage=1 00:10:03.887 --rc genhtml_legend=1 00:10:03.887 --rc geninfo_all_blocks=1 00:10:03.887 --rc geninfo_unexecuted_blocks=1 00:10:03.887 00:10:03.887 ' 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:03.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.887 --rc genhtml_branch_coverage=1 00:10:03.887 --rc genhtml_function_coverage=1 00:10:03.887 --rc genhtml_legend=1 00:10:03.887 --rc geninfo_all_blocks=1 00:10:03.887 --rc geninfo_unexecuted_blocks=1 00:10:03.887 00:10:03.887 ' 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:03.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.887 --rc genhtml_branch_coverage=1 00:10:03.887 --rc genhtml_function_coverage=1 00:10:03.887 --rc genhtml_legend=1 00:10:03.887 --rc geninfo_all_blocks=1 00:10:03.887 --rc geninfo_unexecuted_blocks=1 00:10:03.887 00:10:03.887 ' 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:03.887 09:00:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:03.887 09:00:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:03.887 09:00:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:03.887 09:00:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:03.887 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:03.888 09:00:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:03.888 
09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:03.888 #define SPDK_CONFIG_H 00:10:03.888 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:03.888 #define SPDK_CONFIG_APPS 1 00:10:03.888 #define SPDK_CONFIG_ARCH native 00:10:03.888 #undef SPDK_CONFIG_ASAN 00:10:03.888 #undef SPDK_CONFIG_AVAHI 00:10:03.888 #undef SPDK_CONFIG_CET 00:10:03.888 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:03.888 #define SPDK_CONFIG_COVERAGE 1 00:10:03.888 #define SPDK_CONFIG_CROSS_PREFIX 00:10:03.888 #undef SPDK_CONFIG_CRYPTO 00:10:03.888 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:03.888 #undef SPDK_CONFIG_CUSTOMOCF 00:10:03.888 #undef SPDK_CONFIG_DAOS 00:10:03.888 #define SPDK_CONFIG_DAOS_DIR 00:10:03.888 #define SPDK_CONFIG_DEBUG 1 00:10:03.888 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:03.888 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:03.888 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:03.888 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:03.888 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:03.888 #undef SPDK_CONFIG_DPDK_UADK 00:10:03.888 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:03.888 #define SPDK_CONFIG_EXAMPLES 1 00:10:03.888 #undef SPDK_CONFIG_FC 00:10:03.888 #define SPDK_CONFIG_FC_PATH 00:10:03.888 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:03.888 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:03.888 #define SPDK_CONFIG_FSDEV 1 00:10:03.888 #undef SPDK_CONFIG_FUSE 00:10:03.888 #undef SPDK_CONFIG_FUZZER 00:10:03.888 #define SPDK_CONFIG_FUZZER_LIB 00:10:03.888 #undef SPDK_CONFIG_GOLANG 00:10:03.888 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:03.888 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:03.888 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:03.888 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:03.888 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:03.888 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:03.888 #undef SPDK_CONFIG_HAVE_LZ4 00:10:03.888 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:03.888 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:03.888 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:03.888 #define SPDK_CONFIG_IDXD 1 00:10:03.888 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:03.888 #undef SPDK_CONFIG_IPSEC_MB 00:10:03.888 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:03.888 #define SPDK_CONFIG_ISAL 1 00:10:03.888 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:03.888 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:03.888 #define SPDK_CONFIG_LIBDIR 00:10:03.888 #undef SPDK_CONFIG_LTO 00:10:03.888 #define SPDK_CONFIG_MAX_LCORES 128 00:10:03.888 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:03.888 #define SPDK_CONFIG_NVME_CUSE 1 00:10:03.888 #undef SPDK_CONFIG_OCF 00:10:03.888 #define SPDK_CONFIG_OCF_PATH 00:10:03.888 #define SPDK_CONFIG_OPENSSL_PATH 00:10:03.888 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:03.888 #define SPDK_CONFIG_PGO_DIR 00:10:03.888 #undef SPDK_CONFIG_PGO_USE 00:10:03.888 #define SPDK_CONFIG_PREFIX /usr/local 00:10:03.888 #undef SPDK_CONFIG_RAID5F 00:10:03.888 #undef SPDK_CONFIG_RBD 00:10:03.888 #define SPDK_CONFIG_RDMA 1 00:10:03.888 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:03.888 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:03.888 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:03.888 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:03.888 #define SPDK_CONFIG_SHARED 1 00:10:03.888 #undef SPDK_CONFIG_SMA 00:10:03.888 #define SPDK_CONFIG_TESTS 1 00:10:03.888 #undef SPDK_CONFIG_TSAN 00:10:03.888 #define SPDK_CONFIG_UBLK 1 00:10:03.888 #define SPDK_CONFIG_UBSAN 1 00:10:03.888 #undef SPDK_CONFIG_UNIT_TESTS 00:10:03.888 #undef SPDK_CONFIG_URING 00:10:03.888 #define SPDK_CONFIG_URING_PATH 00:10:03.888 #undef SPDK_CONFIG_URING_ZNS 00:10:03.888 #undef SPDK_CONFIG_USDT 00:10:03.888 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:03.888 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:03.888 #define SPDK_CONFIG_VFIO_USER 1 00:10:03.888 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:03.888 #define SPDK_CONFIG_VHOST 1 00:10:03.888 #define SPDK_CONFIG_VIRTIO 1 00:10:03.888 #undef SPDK_CONFIG_VTUNE 00:10:03.888 #define SPDK_CONFIG_VTUNE_DIR 00:10:03.888 #define SPDK_CONFIG_WERROR 1 00:10:03.888 #define SPDK_CONFIG_WPDK_DIR 00:10:03.888 #undef SPDK_CONFIG_XNVME 00:10:03.888 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:03.888 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:03.889 09:00:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:03.889 
09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:03.889 09:00:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:03.889 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:03.890 
09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:03.890 09:00:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:03.890 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@288 -- # MAKEFLAGS=-j48 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 3263914 ]] 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 3263914 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.xZXYls 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.xZXYls/tests/target /tmp/spdk.xZXYls 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # 
sizes["$mount"]=67108864 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=50854027264 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=61988519936 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11134492672 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 
00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30982893568 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994259968 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11366400 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:03.891 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12375265280 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12397707264 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=22441984 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=29919719424 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994259968 00:10:03.892 09:00:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=1074540544 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6198837248 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6198849536 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:03.892 * Looking for test storage... 
00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=50854027264 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=13349085184 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:03.892 09:00:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:03.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:03.892 09:00:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:03.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.892 --rc genhtml_branch_coverage=1 00:10:03.892 --rc genhtml_function_coverage=1 00:10:03.892 --rc genhtml_legend=1 00:10:03.892 --rc geninfo_all_blocks=1 00:10:03.892 --rc geninfo_unexecuted_blocks=1 00:10:03.892 00:10:03.892 ' 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:03.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.892 --rc genhtml_branch_coverage=1 00:10:03.892 --rc genhtml_function_coverage=1 00:10:03.892 --rc genhtml_legend=1 00:10:03.892 --rc geninfo_all_blocks=1 00:10:03.892 --rc geninfo_unexecuted_blocks=1 00:10:03.892 00:10:03.892 ' 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:03.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.892 --rc genhtml_branch_coverage=1 00:10:03.892 --rc genhtml_function_coverage=1 00:10:03.892 --rc genhtml_legend=1 00:10:03.892 --rc geninfo_all_blocks=1 00:10:03.892 --rc geninfo_unexecuted_blocks=1 00:10:03.892 00:10:03.892 ' 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:03.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.892 --rc genhtml_branch_coverage=1 00:10:03.892 --rc genhtml_function_coverage=1 00:10:03.892 --rc genhtml_legend=1 00:10:03.892 --rc geninfo_all_blocks=1 00:10:03.892 --rc geninfo_unexecuted_blocks=1 00:10:03.892 00:10:03.892 ' 00:10:03.892 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:03.892 09:00:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:03.893 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:03.893 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.430 09:00:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:06.430 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:06.430 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.430 09:00:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:06.430 Found net devices under 0000:09:00.0: cvl_0_0 00:10:06.430 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:06.431 Found net devices under 0000:09:00.1: cvl_0_1 00:10:06.431 09:00:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:06.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:06.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:10:06.431 00:10:06.431 --- 10.0.0.2 ping statistics --- 00:10:06.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.431 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:06.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:06.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:10:06.431 00:10:06.431 --- 10.0.0.1 ping statistics --- 00:10:06.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.431 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:06.431 09:00:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:06.431 ************************************ 00:10:06.431 START TEST nvmf_filesystem_no_in_capsule 00:10:06.431 ************************************ 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3265566 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3265566 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@833 -- # '[' -z 3265566 ']' 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:06.431 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:06.431 [2024-11-20 09:00:43.256836] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:10:06.431 [2024-11-20 09:00:43.256929] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.431 [2024-11-20 09:00:43.332330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:06.431 [2024-11-20 09:00:43.389409] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.431 [2024-11-20 09:00:43.389463] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:06.431 [2024-11-20 09:00:43.389486] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:06.431 [2024-11-20 09:00:43.389497] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:06.431 [2024-11-20 09:00:43.389507] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:06.432 [2024-11-20 09:00:43.390947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.432 [2024-11-20 09:00:43.391078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:06.432 [2024-11-20 09:00:43.391195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:06.432 [2024-11-20 09:00:43.391204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.689 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:06.689 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:10:06.689 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:06.689 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:06.689 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:06.690 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:06.690 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:06.690 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:06.690 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.690 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:06.690 [2024-11-20 09:00:43.538094] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:06.690 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.690 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:06.690 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.690 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:06.690 Malloc1 00:10:06.948 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.948 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:06.948 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.948 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:06.948 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.948 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:06.948 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.948 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:06.948 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.948 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:06.948 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.948 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:06.948 [2024-11-20 09:00:43.736742] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:06.948 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.948 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:06.948 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:10:06.948 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:10:06.948 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:10:06.948 09:00:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:10:06.948 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:06.948 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.948 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:06.948 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.948 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:10:06.948 { 00:10:06.948 "name": "Malloc1", 00:10:06.948 "aliases": [ 00:10:06.948 "ffc73c86-a547-45c8-bdb6-0163132418e3" 00:10:06.948 ], 00:10:06.948 "product_name": "Malloc disk", 00:10:06.948 "block_size": 512, 00:10:06.948 "num_blocks": 1048576, 00:10:06.948 "uuid": "ffc73c86-a547-45c8-bdb6-0163132418e3", 00:10:06.948 "assigned_rate_limits": { 00:10:06.948 "rw_ios_per_sec": 0, 00:10:06.948 "rw_mbytes_per_sec": 0, 00:10:06.948 "r_mbytes_per_sec": 0, 00:10:06.948 "w_mbytes_per_sec": 0 00:10:06.948 }, 00:10:06.948 "claimed": true, 00:10:06.948 "claim_type": "exclusive_write", 00:10:06.948 "zoned": false, 00:10:06.948 "supported_io_types": { 00:10:06.948 "read": true, 00:10:06.948 "write": true, 00:10:06.948 "unmap": true, 00:10:06.948 "flush": true, 00:10:06.948 "reset": true, 00:10:06.948 "nvme_admin": false, 00:10:06.948 "nvme_io": false, 00:10:06.948 "nvme_io_md": false, 00:10:06.948 "write_zeroes": true, 00:10:06.948 "zcopy": true, 00:10:06.948 "get_zone_info": false, 00:10:06.948 "zone_management": false, 00:10:06.948 "zone_append": false, 00:10:06.948 "compare": false, 00:10:06.948 "compare_and_write": 
false, 00:10:06.948 "abort": true, 00:10:06.948 "seek_hole": false, 00:10:06.948 "seek_data": false, 00:10:06.948 "copy": true, 00:10:06.948 "nvme_iov_md": false 00:10:06.948 }, 00:10:06.948 "memory_domains": [ 00:10:06.948 { 00:10:06.948 "dma_device_id": "system", 00:10:06.948 "dma_device_type": 1 00:10:06.948 }, 00:10:06.948 { 00:10:06.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.948 "dma_device_type": 2 00:10:06.948 } 00:10:06.948 ], 00:10:06.948 "driver_specific": {} 00:10:06.948 } 00:10:06.948 ]' 00:10:06.948 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:10:06.948 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:10:06.948 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:10:06.948 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:10:06.948 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:10:06.948 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:10:06.948 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:06.948 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:07.514 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:10:07.514 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:10:07.514 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:07.514 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:07.514 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:10:10.041 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:10.041 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:10.041 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:10.041 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:10.041 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:10.041 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:10:10.041 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:10.041 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:10.041 09:00:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:10.041 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:10.041 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:10.041 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:10.041 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:10.041 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:10.041 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:10.041 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:10.041 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:10.041 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:10.299 09:00:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:11.232 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:11.232 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:11.232 09:00:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:11.232 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:11.232 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.232 ************************************ 00:10:11.232 START TEST filesystem_ext4 00:10:11.232 ************************************ 00:10:11.232 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:11.232 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:11.232 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:11.232 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:11.232 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:10:11.232 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:11.232 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:10:11.232 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:10:11.232 09:00:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:10:11.232 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:10:11.232 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:11.232 mke2fs 1.47.0 (5-Feb-2023) 00:10:11.232 Discarding device blocks: 0/522240 done 00:10:11.232 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:11.232 Filesystem UUID: d95fea43-fbc9-439b-a280-2dbfa50b47fe 00:10:11.232 Superblock backups stored on blocks: 00:10:11.232 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:11.232 00:10:11.232 Allocating group tables: 0/64 done 00:10:11.232 Writing inode tables: 0/64 done 00:10:11.492 Creating journal (8192 blocks): done 00:10:13.704 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:10:13.704 00:10:13.704 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:10:13.704 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:20.320 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:20.320 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:20.320 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:20.320 09:00:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:20.320 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:20.320 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:20.320 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3265566 00:10:20.320 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:20.320 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:20.320 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:20.320 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:20.320 00:10:20.320 real 0m8.120s 00:10:20.320 user 0m0.013s 00:10:20.320 sys 0m0.065s 00:10:20.320 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:20.320 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:20.320 ************************************ 00:10:20.320 END TEST filesystem_ext4 00:10:20.320 ************************************ 00:10:20.320 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:20.320 
09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.321 ************************************ 00:10:20.321 START TEST filesystem_btrfs 00:10:20.321 ************************************ 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:10:20.321 09:00:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:20.321 btrfs-progs v6.8.1 00:10:20.321 See https://btrfs.readthedocs.io for more information. 00:10:20.321 00:10:20.321 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:20.321 NOTE: several default settings have changed in version 5.15, please make sure 00:10:20.321 this does not affect your deployments: 00:10:20.321 - DUP for metadata (-m dup) 00:10:20.321 - enabled no-holes (-O no-holes) 00:10:20.321 - enabled free-space-tree (-R free-space-tree) 00:10:20.321 00:10:20.321 Label: (null) 00:10:20.321 UUID: 3888e37c-ffa3-4f83-9a33-cbce0138a50f 00:10:20.321 Node size: 16384 00:10:20.321 Sector size: 4096 (CPU page size: 4096) 00:10:20.321 Filesystem size: 510.00MiB 00:10:20.321 Block group profiles: 00:10:20.321 Data: single 8.00MiB 00:10:20.321 Metadata: DUP 32.00MiB 00:10:20.321 System: DUP 8.00MiB 00:10:20.321 SSD detected: yes 00:10:20.321 Zoned device: no 00:10:20.321 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:20.321 Checksum: crc32c 00:10:20.321 Number of devices: 1 00:10:20.321 Devices: 00:10:20.321 ID SIZE PATH 00:10:20.321 1 510.00MiB /dev/nvme0n1p1 00:10:20.321 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:20.321 09:00:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3265566 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:20.321 00:10:20.321 real 0m0.627s 00:10:20.321 user 0m0.019s 00:10:20.321 sys 0m0.106s 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:20.321 
09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:20.321 ************************************ 00:10:20.321 END TEST filesystem_btrfs 00:10:20.321 ************************************ 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.321 ************************************ 00:10:20.321 START TEST filesystem_xfs 00:10:20.321 ************************************ 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:20.321 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:10:20.322 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:10:20.322 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:10:20.322 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:10:20.322 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:20.579 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:20.579 = sectsz=512 attr=2, projid32bit=1 00:10:20.579 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:20.579 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:20.579 data = bsize=4096 blocks=130560, imaxpct=25 00:10:20.579 = sunit=0 swidth=0 blks 00:10:20.579 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:20.579 log =internal log bsize=4096 blocks=16384, version=2 00:10:20.579 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:20.579 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:21.512 Discarding blocks...Done. 
00:10:21.512 09:00:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:10:21.512 09:00:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:23.410 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:23.410 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:23.410 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:23.410 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:23.410 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:23.410 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:23.410 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3265566 00:10:23.410 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:23.410 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:23.410 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:23.410 09:01:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:23.410 00:10:23.410 real 0m3.273s 00:10:23.410 user 0m0.018s 00:10:23.410 sys 0m0.056s 00:10:23.410 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:23.410 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:23.410 ************************************ 00:10:23.410 END TEST filesystem_xfs 00:10:23.410 ************************************ 00:10:23.410 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:23.410 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:23.410 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:23.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.410 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:23.410 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:10:23.410 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:23.410 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:23.410 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:23.410 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:23.667 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:10:23.667 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:23.667 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.667 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:23.667 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.667 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:23.667 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3265566 00:10:23.667 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 3265566 ']' 00:10:23.667 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 3265566 00:10:23.667 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:10:23.667 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:23.667 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3265566 00:10:23.667 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:23.667 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:23.667 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3265566' 00:10:23.667 killing process with pid 3265566 00:10:23.667 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 3265566 00:10:23.667 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@976 -- # wait 3265566 00:10:23.926 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:23.926 00:10:23.926 real 0m17.744s 00:10:23.926 user 1m8.810s 00:10:23.926 sys 0m2.060s 00:10:23.926 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:23.926 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:23.926 ************************************ 00:10:23.926 END TEST nvmf_filesystem_no_in_capsule 00:10:23.926 ************************************ 00:10:24.185 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:24.185 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:24.185 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:24.185 09:01:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:24.185 ************************************ 00:10:24.185 START TEST nvmf_filesystem_in_capsule 00:10:24.185 ************************************ 00:10:24.185 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:10:24.185 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:24.185 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:24.185 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:24.185 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:24.185 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:24.185 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3267916 00:10:24.185 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:24.185 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3267916 00:10:24.185 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 3267916 ']' 00:10:24.185 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.185 09:01:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:24.185 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.185 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:24.185 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:24.185 [2024-11-20 09:01:01.053296] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:10:24.185 [2024-11-20 09:01:01.053398] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:24.185 [2024-11-20 09:01:01.122608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:24.185 [2024-11-20 09:01:01.177492] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:24.185 [2024-11-20 09:01:01.177549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:24.185 [2024-11-20 09:01:01.177577] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:24.185 [2024-11-20 09:01:01.177606] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:24.185 [2024-11-20 09:01:01.177617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:24.185 [2024-11-20 09:01:01.179125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:24.185 [2024-11-20 09:01:01.179234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:24.185 [2024-11-20 09:01:01.179332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:24.185 [2024-11-20 09:01:01.179337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.443 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:24.443 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:10:24.443 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:24.443 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:24.443 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:24.443 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:24.443 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:24.443 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:24.443 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.443 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:24.443 [2024-11-20 09:01:01.320914] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:24.443 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.443 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:24.443 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.443 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:24.701 Malloc1 00:10:24.701 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.701 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:24.701 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.701 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:24.701 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.701 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:24.701 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.701 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:24.701 09:01:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.701 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:24.701 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.701 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:24.701 [2024-11-20 09:01:01.492745] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:24.701 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.701 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:24.701 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:10:24.701 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:10:24.701 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:10:24.701 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:10:24.701 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:24.701 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.701 09:01:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:24.701 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.701 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:10:24.701 { 00:10:24.701 "name": "Malloc1", 00:10:24.701 "aliases": [ 00:10:24.701 "07673713-8081-4f6a-abe2-f1a76f4054d0" 00:10:24.701 ], 00:10:24.701 "product_name": "Malloc disk", 00:10:24.701 "block_size": 512, 00:10:24.701 "num_blocks": 1048576, 00:10:24.702 "uuid": "07673713-8081-4f6a-abe2-f1a76f4054d0", 00:10:24.702 "assigned_rate_limits": { 00:10:24.702 "rw_ios_per_sec": 0, 00:10:24.702 "rw_mbytes_per_sec": 0, 00:10:24.702 "r_mbytes_per_sec": 0, 00:10:24.702 "w_mbytes_per_sec": 0 00:10:24.702 }, 00:10:24.702 "claimed": true, 00:10:24.702 "claim_type": "exclusive_write", 00:10:24.702 "zoned": false, 00:10:24.702 "supported_io_types": { 00:10:24.702 "read": true, 00:10:24.702 "write": true, 00:10:24.702 "unmap": true, 00:10:24.702 "flush": true, 00:10:24.702 "reset": true, 00:10:24.702 "nvme_admin": false, 00:10:24.702 "nvme_io": false, 00:10:24.702 "nvme_io_md": false, 00:10:24.702 "write_zeroes": true, 00:10:24.702 "zcopy": true, 00:10:24.702 "get_zone_info": false, 00:10:24.702 "zone_management": false, 00:10:24.702 "zone_append": false, 00:10:24.702 "compare": false, 00:10:24.702 "compare_and_write": false, 00:10:24.702 "abort": true, 00:10:24.702 "seek_hole": false, 00:10:24.702 "seek_data": false, 00:10:24.702 "copy": true, 00:10:24.702 "nvme_iov_md": false 00:10:24.702 }, 00:10:24.702 "memory_domains": [ 00:10:24.702 { 00:10:24.702 "dma_device_id": "system", 00:10:24.702 "dma_device_type": 1 00:10:24.702 }, 00:10:24.702 { 00:10:24.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.702 "dma_device_type": 2 00:10:24.702 } 00:10:24.702 ], 00:10:24.702 
"driver_specific": {} 00:10:24.702 } 00:10:24.702 ]' 00:10:24.702 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:10:24.702 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:10:24.702 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:10:24.702 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:10:24.702 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:10:24.702 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:10:24.702 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:24.702 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:25.267 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:25.267 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:10:25.267 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:25.267 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n 
'' ]] 00:10:25.267 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:10:27.793 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:27.793 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:27.793 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:27.793 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:27.793 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:27.793 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:10:27.793 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:27.793 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:27.793 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:27.793 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:27.793 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:27.793 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:27.793 09:01:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:27.793 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:27.793 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:27.793 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:27.793 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:27.793 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:28.358 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:29.292 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:29.292 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:29.293 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:29.293 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:29.293 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.293 ************************************ 00:10:29.293 START TEST filesystem_in_capsule_ext4 00:10:29.293 ************************************ 00:10:29.293 09:01:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:29.293 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:29.293 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:29.293 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:29.293 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:10:29.293 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:29.293 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:10:29.293 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:10:29.293 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:10:29.293 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:10:29.293 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:29.293 mke2fs 1.47.0 (5-Feb-2023) 00:10:29.293 Discarding device blocks: 
0/522240 done 00:10:29.550 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:29.550 Filesystem UUID: d8533533-d0d5-4160-8eb5-6fac242f732d 00:10:29.550 Superblock backups stored on blocks: 00:10:29.550 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:29.550 00:10:29.550 Allocating group tables: 0/64 done 00:10:29.550 Writing inode tables: 0/64 done 00:10:32.077 Creating journal (8192 blocks): done 00:10:34.384 Writing superblocks and filesystem accounting information: 0/6450/64 done 00:10:34.384 00:10:34.384 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:10:34.384 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:40.939 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:40.939 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:40.939 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:40.939 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:40.939 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:40.939 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:40.939 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 3267916 00:10:40.939 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:40.939 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:40.939 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:40.939 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:40.939 00:10:40.939 real 0m10.908s 00:10:40.939 user 0m0.016s 00:10:40.939 sys 0m0.063s 00:10:40.939 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:40.939 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:40.939 ************************************ 00:10:40.939 END TEST filesystem_in_capsule_ext4 00:10:40.939 ************************************ 00:10:40.939 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:40.939 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:40.939 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:40.939 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.939 ************************************ 00:10:40.939 START 
TEST filesystem_in_capsule_btrfs 00:10:40.939 ************************************ 00:10:40.939 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:40.939 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:40.939 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:40.939 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:40.939 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:10:40.939 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:40.939 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:10:40.939 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:10:40.939 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:10:40.939 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:10:40.939 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:40.939 btrfs-progs v6.8.1 00:10:40.939 See https://btrfs.readthedocs.io for more information. 00:10:40.939 00:10:40.939 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:40.939 NOTE: several default settings have changed in version 5.15, please make sure 00:10:40.939 this does not affect your deployments: 00:10:40.939 - DUP for metadata (-m dup) 00:10:40.939 - enabled no-holes (-O no-holes) 00:10:40.939 - enabled free-space-tree (-R free-space-tree) 00:10:40.939 00:10:40.939 Label: (null) 00:10:40.939 UUID: b00a021b-50d3-4747-9962-b71fc63846f0 00:10:40.939 Node size: 16384 00:10:40.939 Sector size: 4096 (CPU page size: 4096) 00:10:40.939 Filesystem size: 510.00MiB 00:10:40.939 Block group profiles: 00:10:40.939 Data: single 8.00MiB 00:10:40.939 Metadata: DUP 32.00MiB 00:10:40.939 System: DUP 8.00MiB 00:10:40.939 SSD detected: yes 00:10:40.939 Zoned device: no 00:10:40.939 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:40.939 Checksum: crc32c 00:10:40.939 Number of devices: 1 00:10:40.939 Devices: 00:10:40.939 ID SIZE PATH 00:10:40.939 1 510.00MiB /dev/nvme0n1p1 00:10:40.939 00:10:40.939 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:10:40.939 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:41.872 09:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:41.872 09:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:41.872 09:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:41.872 09:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:41.872 09:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:41.872 09:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:41.872 09:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3267916 00:10:41.872 09:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:41.872 09:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:41.872 09:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:41.872 09:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:41.872 00:10:41.872 real 0m1.609s 00:10:41.872 user 0m0.019s 00:10:41.872 sys 0m0.098s 00:10:41.872 09:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:41.872 09:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:41.872 ************************************ 00:10:41.872 END TEST filesystem_in_capsule_btrfs 00:10:41.872 ************************************ 00:10:41.872 09:01:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:41.872 09:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:41.872 09:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:41.872 09:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:41.872 ************************************ 00:10:41.872 START TEST filesystem_in_capsule_xfs 00:10:41.872 ************************************ 00:10:41.872 09:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:10:41.872 09:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:41.872 09:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:41.872 09:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:41.872 09:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:10:41.872 09:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:41.872 09:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:10:41.872 
09:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:10:41.872 09:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:10:41.872 09:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:10:41.872 09:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:42.130 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:42.130 = sectsz=512 attr=2, projid32bit=1 00:10:42.130 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:42.130 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:42.130 data = bsize=4096 blocks=130560, imaxpct=25 00:10:42.130 = sunit=0 swidth=0 blks 00:10:42.130 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:42.130 log =internal log bsize=4096 blocks=16384, version=2 00:10:42.130 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:42.130 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:43.503 Discarding blocks...Done. 
00:10:43.503 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:10:43.503 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:44.875 09:01:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:45.133 09:01:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:45.133 09:01:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:45.133 09:01:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:45.133 09:01:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:45.133 09:01:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:45.133 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3267916 00:10:45.133 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:45.133 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:45.133 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:10:45.133 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:45.133 00:10:45.133 real 0m3.231s 00:10:45.133 user 0m0.013s 00:10:45.133 sys 0m0.069s 00:10:45.133 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:45.133 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:45.133 ************************************ 00:10:45.133 END TEST filesystem_in_capsule_xfs 00:10:45.133 ************************************ 00:10:45.133 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:45.133 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:45.133 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:45.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.391 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:45.391 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:10:45.391 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:45.391 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.391 09:01:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:45.391 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.391 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:10:45.391 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:45.391 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.391 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:45.391 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.391 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:45.391 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3267916 00:10:45.391 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 3267916 ']' 00:10:45.391 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 3267916 00:10:45.391 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:10:45.391 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:45.391 09:01:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3267916 00:10:45.391 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:45.391 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:45.391 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3267916' 00:10:45.391 killing process with pid 3267916 00:10:45.391 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 3267916 00:10:45.391 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 3267916 00:10:45.958 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:45.958 00:10:45.958 real 0m21.708s 00:10:45.958 user 1m24.250s 00:10:45.958 sys 0m2.436s 00:10:45.959 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:45.959 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:45.959 ************************************ 00:10:45.959 END TEST nvmf_filesystem_in_capsule 00:10:45.959 ************************************ 00:10:45.959 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:45.959 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:45.959 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:45.959 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:45.959 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:45.959 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:45.959 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:45.959 rmmod nvme_tcp 00:10:45.959 rmmod nvme_fabrics 00:10:45.959 rmmod nvme_keyring 00:10:45.959 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:45.959 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:45.959 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:45.959 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:45.959 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:45.959 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:45.959 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:45.959 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:45.959 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:45.959 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:45.959 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:45.959 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:45.959 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:45.959 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.959 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.959 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.860 09:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:47.860 00:10:47.860 real 0m44.260s 00:10:47.860 user 2m34.140s 00:10:47.860 sys 0m6.256s 00:10:47.860 09:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:47.860 09:01:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:47.860 ************************************ 00:10:47.860 END TEST nvmf_filesystem 00:10:47.860 ************************************ 00:10:47.860 09:01:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:47.860 09:01:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:47.860 09:01:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:47.860 09:01:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:47.860 ************************************ 00:10:47.860 START TEST nvmf_target_discovery 00:10:47.860 ************************************ 00:10:47.860 09:01:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:48.119 * Looking for test storage... 
00:10:48.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:48.119 09:01:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:48.119 09:01:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:10:48.119 09:01:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:48.119 
09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:48.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.119 --rc genhtml_branch_coverage=1 00:10:48.119 --rc genhtml_function_coverage=1 00:10:48.119 --rc genhtml_legend=1 00:10:48.119 --rc geninfo_all_blocks=1 00:10:48.119 --rc geninfo_unexecuted_blocks=1 00:10:48.119 00:10:48.119 ' 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:48.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.119 --rc genhtml_branch_coverage=1 00:10:48.119 --rc genhtml_function_coverage=1 00:10:48.119 --rc genhtml_legend=1 00:10:48.119 --rc geninfo_all_blocks=1 00:10:48.119 --rc geninfo_unexecuted_blocks=1 00:10:48.119 00:10:48.119 ' 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:48.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.119 --rc genhtml_branch_coverage=1 00:10:48.119 --rc genhtml_function_coverage=1 00:10:48.119 --rc genhtml_legend=1 00:10:48.119 --rc geninfo_all_blocks=1 00:10:48.119 --rc geninfo_unexecuted_blocks=1 00:10:48.119 00:10:48.119 ' 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:48.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.119 --rc genhtml_branch_coverage=1 00:10:48.119 --rc genhtml_function_coverage=1 00:10:48.119 --rc genhtml_legend=1 00:10:48.119 --rc geninfo_all_blocks=1 00:10:48.119 --rc geninfo_unexecuted_blocks=1 00:10:48.119 00:10:48.119 ' 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:48.119 09:01:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:48.119 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:48.120 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.120 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.120 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.120 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:48.120 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.120 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:48.120 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:48.120 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:48.120 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:48.120 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:48.120 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:48.120 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:48.120 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:48.120 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:48.120 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:48.120 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:48.120 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:10:48.120 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:48.120 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:48.120 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:48.120 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:48.120 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:48.120 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:48.120 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:48.120 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:48.120 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:48.120 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.120 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:48.120 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.120 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:48.120 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:48.120 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:48.120 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.679 09:01:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:50.679 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:10:50.679 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:50.679 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:50.679 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:50.679 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:50.679 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:50.679 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:10:50.679 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:50.679 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:10:50.679 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:10:50.679 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:10:50.679 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:10:50.679 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:10:50.679 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:50.680 09:01:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:50.680 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:50.680 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:50.680 09:01:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:50.680 Found net devices under 0000:09:00.0: cvl_0_0 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:50.680 09:01:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:50.680 Found net devices under 0000:09:00.1: cvl_0_1 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:50.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:50.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:10:50.680 00:10:50.680 --- 10.0.0.2 ping statistics --- 00:10:50.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.680 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:50.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:50.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:10:50.680 00:10:50.680 --- 10.0.0.1 ping statistics --- 00:10:50.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.680 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:50.680 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:10:50.681 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:50.681 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:50.681 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:50.681 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:50.681 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:50.681 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:50.681 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:50.681 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:50.681 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:50.681 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:50.681 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.681 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3272614 00:10:50.681 09:01:27 
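The trace above (nvmf/common.sh@265-291) builds the test topology: it moves one port of the E810 NIC pair into a private network namespace, assigns 10.0.0.1 to the initiator-side interface and 10.0.0.2 to the target-side one, opens TCP port 4420 in iptables, and verifies connectivity with ping in both directions. A dry-run sketch of those steps, using the interface and namespace names from this run (`cvl_0_0`, `cvl_0_1`, `cvl_0_0_ns_spdk`); the `run` helper only echoes each command, since executing them for real requires root and the actual hardware:

```shell
#!/bin/sh
# Dry-run sketch of the namespace setup performed in nvmf/common.sh above.
# Names (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk, 10.0.0.x) are taken from this log;
# replace the echo in run() with real execution (as root) to apply them.
run() { echo "+ $*"; }

run ip netns add cvl_0_0_ns_spdk                                  # private ns for the target
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move target port into it
run ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator IP (host side)
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP (ns side)
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run ip netns exec cvl_0_0_ns_spdk ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP traffic
run ping -c 1 10.0.0.2                                            # host -> target sanity check
run ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> host sanity check
```

Isolating the target port in its own namespace is what lets a single machine act as both NVMe/TCP initiator and target over a physical loopback between the two NIC ports.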
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:50.681 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3272614 00:10:50.681 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 3272614 ']' 00:10:50.681 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.681 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:50.681 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.681 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:50.681 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.681 [2024-11-20 09:01:27.407297] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:10:50.681 [2024-11-20 09:01:27.407405] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.681 [2024-11-20 09:01:27.489891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:50.681 [2024-11-20 09:01:27.552731] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:50.681 [2024-11-20 09:01:27.552779] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:50.681 [2024-11-20 09:01:27.552808] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:50.681 [2024-11-20 09:01:27.552820] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:50.681 [2024-11-20 09:01:27.552829] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:50.681 [2024-11-20 09:01:27.554461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.681 [2024-11-20 09:01:27.554489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.681 [2024-11-20 09:01:27.554545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:50.681 [2024-11-20 09:01:27.554548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.681 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:50.681 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:10:50.681 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:50.681 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:50.681 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.939 [2024-11-20 09:01:27.710107] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.939 Null1 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.939 
09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.939 [2024-11-20 09:01:27.750431] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.939 Null2 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.939 
09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.939 Null3 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:50.939 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.940 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.940 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.940 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:50.940 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:50.940 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.940 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.940 Null4 00:10:50.940 
09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.940 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:50.940 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.940 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.940 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.940 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:50.940 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.940 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.940 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.940 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:50.940 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.940 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.940 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.940 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:50.940 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.940 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.940 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.940 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:50.940 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.940 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:50.940 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.940 09:01:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:10:51.198 00:10:51.198 Discovery Log Number of Records 6, Generation counter 6 00:10:51.198 =====Discovery Log Entry 0====== 00:10:51.198 trtype: tcp 00:10:51.198 adrfam: ipv4 00:10:51.198 subtype: current discovery subsystem 00:10:51.198 treq: not required 00:10:51.198 portid: 0 00:10:51.198 trsvcid: 4420 00:10:51.198 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:51.198 traddr: 10.0.0.2 00:10:51.198 eflags: explicit discovery connections, duplicate discovery information 00:10:51.198 sectype: none 00:10:51.198 =====Discovery Log Entry 1====== 00:10:51.198 trtype: tcp 00:10:51.198 adrfam: ipv4 00:10:51.198 subtype: nvme subsystem 00:10:51.198 treq: not required 00:10:51.198 portid: 0 00:10:51.198 trsvcid: 4420 00:10:51.198 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:51.198 traddr: 10.0.0.2 00:10:51.198 eflags: none 00:10:51.198 sectype: none 00:10:51.198 =====Discovery Log Entry 2====== 00:10:51.198 
trtype: tcp 00:10:51.198 adrfam: ipv4 00:10:51.198 subtype: nvme subsystem 00:10:51.198 treq: not required 00:10:51.198 portid: 0 00:10:51.198 trsvcid: 4420 00:10:51.198 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:51.198 traddr: 10.0.0.2 00:10:51.198 eflags: none 00:10:51.198 sectype: none 00:10:51.198 =====Discovery Log Entry 3====== 00:10:51.198 trtype: tcp 00:10:51.198 adrfam: ipv4 00:10:51.198 subtype: nvme subsystem 00:10:51.198 treq: not required 00:10:51.198 portid: 0 00:10:51.198 trsvcid: 4420 00:10:51.198 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:51.198 traddr: 10.0.0.2 00:10:51.198 eflags: none 00:10:51.198 sectype: none 00:10:51.198 =====Discovery Log Entry 4====== 00:10:51.198 trtype: tcp 00:10:51.198 adrfam: ipv4 00:10:51.198 subtype: nvme subsystem 00:10:51.198 treq: not required 00:10:51.198 portid: 0 00:10:51.198 trsvcid: 4420 00:10:51.198 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:51.198 traddr: 10.0.0.2 00:10:51.198 eflags: none 00:10:51.198 sectype: none 00:10:51.198 =====Discovery Log Entry 5====== 00:10:51.198 trtype: tcp 00:10:51.198 adrfam: ipv4 00:10:51.198 subtype: discovery subsystem referral 00:10:51.198 treq: not required 00:10:51.198 portid: 0 00:10:51.198 trsvcid: 4430 00:10:51.198 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:51.198 traddr: 10.0.0.2 00:10:51.198 eflags: none 00:10:51.198 sectype: none 00:10:51.198 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:51.198 Perform nvmf subsystem discovery via RPC 00:10:51.198 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:51.198 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.198 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.198 [ 00:10:51.198 { 00:10:51.198 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:10:51.198 "subtype": "Discovery", 00:10:51.198 "listen_addresses": [ 00:10:51.198 { 00:10:51.198 "trtype": "TCP", 00:10:51.198 "adrfam": "IPv4", 00:10:51.198 "traddr": "10.0.0.2", 00:10:51.198 "trsvcid": "4420" 00:10:51.198 } 00:10:51.198 ], 00:10:51.198 "allow_any_host": true, 00:10:51.198 "hosts": [] 00:10:51.198 }, 00:10:51.198 { 00:10:51.198 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:51.198 "subtype": "NVMe", 00:10:51.198 "listen_addresses": [ 00:10:51.198 { 00:10:51.198 "trtype": "TCP", 00:10:51.198 "adrfam": "IPv4", 00:10:51.198 "traddr": "10.0.0.2", 00:10:51.198 "trsvcid": "4420" 00:10:51.198 } 00:10:51.198 ], 00:10:51.198 "allow_any_host": true, 00:10:51.198 "hosts": [], 00:10:51.198 "serial_number": "SPDK00000000000001", 00:10:51.198 "model_number": "SPDK bdev Controller", 00:10:51.198 "max_namespaces": 32, 00:10:51.198 "min_cntlid": 1, 00:10:51.198 "max_cntlid": 65519, 00:10:51.198 "namespaces": [ 00:10:51.198 { 00:10:51.198 "nsid": 1, 00:10:51.198 "bdev_name": "Null1", 00:10:51.198 "name": "Null1", 00:10:51.198 "nguid": "EA8FCBE7982146FB86D18B1E7D931808", 00:10:51.198 "uuid": "ea8fcbe7-9821-46fb-86d1-8b1e7d931808" 00:10:51.198 } 00:10:51.198 ] 00:10:51.198 }, 00:10:51.198 { 00:10:51.198 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:51.198 "subtype": "NVMe", 00:10:51.198 "listen_addresses": [ 00:10:51.198 { 00:10:51.198 "trtype": "TCP", 00:10:51.198 "adrfam": "IPv4", 00:10:51.198 "traddr": "10.0.0.2", 00:10:51.198 "trsvcid": "4420" 00:10:51.198 } 00:10:51.198 ], 00:10:51.198 "allow_any_host": true, 00:10:51.198 "hosts": [], 00:10:51.198 "serial_number": "SPDK00000000000002", 00:10:51.198 "model_number": "SPDK bdev Controller", 00:10:51.198 "max_namespaces": 32, 00:10:51.198 "min_cntlid": 1, 00:10:51.198 "max_cntlid": 65519, 00:10:51.198 "namespaces": [ 00:10:51.198 { 00:10:51.198 "nsid": 1, 00:10:51.198 "bdev_name": "Null2", 00:10:51.198 "name": "Null2", 00:10:51.198 "nguid": "52ACEA3A31BF4FBD8D36BBADA2497175", 
00:10:51.198 "uuid": "52acea3a-31bf-4fbd-8d36-bbada2497175" 00:10:51.198 } 00:10:51.198 ] 00:10:51.198 }, 00:10:51.198 { 00:10:51.198 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:51.198 "subtype": "NVMe", 00:10:51.198 "listen_addresses": [ 00:10:51.198 { 00:10:51.198 "trtype": "TCP", 00:10:51.198 "adrfam": "IPv4", 00:10:51.198 "traddr": "10.0.0.2", 00:10:51.198 "trsvcid": "4420" 00:10:51.198 } 00:10:51.198 ], 00:10:51.198 "allow_any_host": true, 00:10:51.198 "hosts": [], 00:10:51.198 "serial_number": "SPDK00000000000003", 00:10:51.198 "model_number": "SPDK bdev Controller", 00:10:51.198 "max_namespaces": 32, 00:10:51.198 "min_cntlid": 1, 00:10:51.198 "max_cntlid": 65519, 00:10:51.198 "namespaces": [ 00:10:51.198 { 00:10:51.198 "nsid": 1, 00:10:51.198 "bdev_name": "Null3", 00:10:51.198 "name": "Null3", 00:10:51.198 "nguid": "037E16A442A1485DA2F6B8D34B2A5183", 00:10:51.198 "uuid": "037e16a4-42a1-485d-a2f6-b8d34b2a5183" 00:10:51.198 } 00:10:51.198 ] 00:10:51.198 }, 00:10:51.198 { 00:10:51.198 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:51.198 "subtype": "NVMe", 00:10:51.198 "listen_addresses": [ 00:10:51.198 { 00:10:51.198 "trtype": "TCP", 00:10:51.198 "adrfam": "IPv4", 00:10:51.198 "traddr": "10.0.0.2", 00:10:51.198 "trsvcid": "4420" 00:10:51.198 } 00:10:51.198 ], 00:10:51.198 "allow_any_host": true, 00:10:51.198 "hosts": [], 00:10:51.198 "serial_number": "SPDK00000000000004", 00:10:51.198 "model_number": "SPDK bdev Controller", 00:10:51.198 "max_namespaces": 32, 00:10:51.198 "min_cntlid": 1, 00:10:51.199 "max_cntlid": 65519, 00:10:51.199 "namespaces": [ 00:10:51.199 { 00:10:51.199 "nsid": 1, 00:10:51.199 "bdev_name": "Null4", 00:10:51.199 "name": "Null4", 00:10:51.199 "nguid": "3E55809BE9734C5D88E1FED0ECBFDE61", 00:10:51.199 "uuid": "3e55809b-e973-4c5d-88e1-fed0ecbfde61" 00:10:51.199 } 00:10:51.199 ] 00:10:51.199 } 00:10:51.199 ] 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.199 
09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:51.199 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:51.199 rmmod nvme_tcp 00:10:51.199 rmmod nvme_fabrics 00:10:51.456 rmmod nvme_keyring 00:10:51.456 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:51.456 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:51.456 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:51.456 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3272614 ']' 00:10:51.456 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3272614 00:10:51.456 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 3272614 ']' 00:10:51.456 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 3272614 00:10:51.456 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 
00:10:51.456 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:51.457 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3272614 00:10:51.457 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:51.457 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:51.457 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3272614' 00:10:51.457 killing process with pid 3272614 00:10:51.457 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 3272614 00:10:51.457 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 3272614 00:10:51.715 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:51.715 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:51.715 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:51.715 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:51.715 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:10:51.715 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:51.715 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:10:51.715 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:51.715 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:51.715 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.715 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.715 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.624 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:53.624 00:10:53.624 real 0m5.693s 00:10:53.624 user 0m4.795s 00:10:53.624 sys 0m2.029s 00:10:53.624 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:53.624 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:53.624 ************************************ 00:10:53.624 END TEST nvmf_target_discovery 00:10:53.624 ************************************ 00:10:53.624 09:01:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:53.624 09:01:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:53.624 09:01:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:53.624 09:01:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:53.624 ************************************ 00:10:53.624 START TEST nvmf_referrals 00:10:53.624 ************************************ 00:10:53.624 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:53.883 * Looking for test storage... 
00:10:53.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:53.883 09:01:30 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:53.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.883 
--rc genhtml_branch_coverage=1 00:10:53.883 --rc genhtml_function_coverage=1 00:10:53.883 --rc genhtml_legend=1 00:10:53.883 --rc geninfo_all_blocks=1 00:10:53.883 --rc geninfo_unexecuted_blocks=1 00:10:53.883 00:10:53.883 ' 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:53.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.883 --rc genhtml_branch_coverage=1 00:10:53.883 --rc genhtml_function_coverage=1 00:10:53.883 --rc genhtml_legend=1 00:10:53.883 --rc geninfo_all_blocks=1 00:10:53.883 --rc geninfo_unexecuted_blocks=1 00:10:53.883 00:10:53.883 ' 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:53.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.883 --rc genhtml_branch_coverage=1 00:10:53.883 --rc genhtml_function_coverage=1 00:10:53.883 --rc genhtml_legend=1 00:10:53.883 --rc geninfo_all_blocks=1 00:10:53.883 --rc geninfo_unexecuted_blocks=1 00:10:53.883 00:10:53.883 ' 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:53.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.883 --rc genhtml_branch_coverage=1 00:10:53.883 --rc genhtml_function_coverage=1 00:10:53.883 --rc genhtml_legend=1 00:10:53.883 --rc geninfo_all_blocks=1 00:10:53.883 --rc geninfo_unexecuted_blocks=1 00:10:53.883 00:10:53.883 ' 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:53.883 
09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:53.883 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.884 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.884 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.884 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:53.884 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.884 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:53.884 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:53.884 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:53.884 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:53.884 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:53.884 09:01:30 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:53.884 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:53.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:53.884 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:53.884 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:53.884 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:53.884 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:53.884 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:53.884 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:53.884 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:53.884 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:53.884 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:53.884 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:53.884 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:53.884 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:53.884 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:53.884 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:53.884 09:01:30 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:53.884 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.884 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:53.884 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.884 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:53.884 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:53.884 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:10:53.884 09:01:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.414 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:56.414 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:10:56.414 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:56.414 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:56.414 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:56.414 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:56.414 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:56.414 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:10:56.414 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:56.414 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:10:56.414 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:10:56.414 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:10:56.414 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:10:56.414 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:10:56.414 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:10:56.414 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:56.414 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:56.414 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:56.415 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:56.415 Found 
0000:09:00.1 (0x8086 - 0x159b) 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:56.415 Found net devices under 0000:09:00.0: cvl_0_0 00:10:56.415 09:01:32 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:56.415 Found net devices under 0000:09:00.1: cvl_0_1 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:56.415 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:56.415 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:56.415 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:56.415 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:56.415 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:56.415 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:56.415 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:56.415 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:56.415 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:56.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:56.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:10:56.415 00:10:56.415 --- 10.0.0.2 ping statistics --- 00:10:56.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.415 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:10:56.415 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:56.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
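The `ipts` helper seen above re-issues the caller's `iptables` arguments with an `-m comment` tag naming the original rule, so the test harness can later find and delete exactly the rules it added. A minimal sketch of that idea — here `iptables` is a stub function that just prints, so the example runs without root; the real helper invokes the actual binary:

```shell
#!/usr/bin/env bash
# Stub iptables so the sketch is runnable unprivileged; in real use this
# would be the iptables binary itself.
iptables() { echo "iptables $*"; }

# Tag every rule with a comment recording the original arguments, mirroring
# the 'SPDK_NVMF:...' comments in the log above.
ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }

rule=$(ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT)
echo "$rule"
```

Cleanup can then match on the comment (e.g. `iptables-save | grep SPDK_NVMF`) instead of reconstructing each rule.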
00:10:56.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:10:56.415 00:10:56.415 --- 10.0.0.1 ping statistics --- 00:10:56.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.415 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:10:56.415 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:56.415 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:10:56.415 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:56.415 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:56.415 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:56.415 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:56.415 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:56.415 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:56.415 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:56.415 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:56.415 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:56.415 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:56.415 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.415 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3274718 00:10:56.415 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:56.415 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3274718 00:10:56.415 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 3274718 ']' 00:10:56.415 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.416 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:56.416 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.416 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:56.416 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.416 [2024-11-20 09:01:33.183207] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:10:56.416 [2024-11-20 09:01:33.183310] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.416 [2024-11-20 09:01:33.252991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:56.416 [2024-11-20 09:01:33.307804] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:56.416 [2024-11-20 09:01:33.307859] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
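`waitforlisten` above blocks until the freshly started `nvmf_tgt` is accepting RPCs on `/var/tmp/spdk.sock`. A simplified, self-contained stand-in for that pattern — polling until a unix socket appears — is sketched below; the real helper additionally verifies the pid is alive and that the RPC server responds, and `wait_for_sock` is a hypothetical name:

```shell
#!/usr/bin/env bash
# Poll until a unix socket exists at the given path, up to ~10s.
wait_for_sock() {
  local path=$1 retries=${2:-100}
  while (( retries-- > 0 )); do
    [ -S "$path" ] && return 0
    sleep 0.1
  done
  return 1
}

sock=$(mktemp -u)   # a path only; nothing is listening yet
# Simulate a slow-starting daemon that binds the socket after a delay.
( sleep 0.3
  python3 -c 'import socket,sys; s=socket.socket(socket.AF_UNIX); s.bind(sys.argv[1])' "$sock"
) &
wait_for_sock "$sock" && echo "listening on $sock"
wait
```

Polling with a bounded retry count (rather than a bare `sleep`) is what lets the harness print the "Waiting for process to start up and listen..." message and still fail fast if startup hangs.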
00:10:56.416 [2024-11-20 09:01:33.307886] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:56.416 [2024-11-20 09:01:33.307897] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:56.416 [2024-11-20 09:01:33.307906] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:56.416 [2024-11-20 09:01:33.309536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.416 [2024-11-20 09:01:33.309606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.416 [2024-11-20 09:01:33.309664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:56.416 [2024-11-20 09:01:33.309667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.416 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:56.416 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:10:56.416 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:56.416 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:56.416 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.736 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.736 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:56.736 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.736 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.736 [2024-11-20 09:01:33.457422] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:56.736 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.736 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:56.736 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.736 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.736 [2024-11-20 09:01:33.469654] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:56.736 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.736 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:56.736 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.736 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.736 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.736 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:56.736 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.736 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.736 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.736 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:56.736 09:01:33 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.736 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.736 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.736 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:56.736 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:56.736 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.736 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.736 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.737 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:56.737 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:56.737 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:56.737 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:56.737 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.737 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:56.737 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:56.737 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:56.737 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.737 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:56.737 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:56.737 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:56.737 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:56.737 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:56.737 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:56.737 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:56.737 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:57.051 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:57.051 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:57.051 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:57.051 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.051 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.051 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.051 09:01:33 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:57.051 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.051 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.051 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.051 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:57.052 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.052 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.052 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.052 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:57.052 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.052 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:57.052 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.052 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.052 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:57.052 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:57.052 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:57.052 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:10:57.052 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:57.052 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:57.052 09:01:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:57.052 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:57.052 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:57.052 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:57.052 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.052 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.052 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.052 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:57.052 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.052 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.052 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.052 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:57.052 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:57.052 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:57.052 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.052 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:57.052 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.052 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:57.309 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.309 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:57.309 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:57.309 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:57.309 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:57.309 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:57.309 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:57.309 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:57.309 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:57.309 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:57.309 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:57.309 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:57.309 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:57.309 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:57.309 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:57.309 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:57.567 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:57.567 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:57.567 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:57.567 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:57.567 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:57.567 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:10:57.824 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:57.824 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:57.824 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.824 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.825 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.825 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:57.825 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:57.825 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:57.825 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.825 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:57.825 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.825 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:57.825 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.825 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:57.825 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:57.825 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:57.825 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:57.825 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:57.825 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:57.825 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:57.825 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:58.082 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:58.083 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:58.083 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:58.083 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:58.083 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:58.083 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:58.083 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:58.083 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:58.083 09:01:35 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:58.083 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:58.083 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:58.083 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:58.083 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:58.340 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:58.340 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:58.340 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.340 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:58.340 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.340 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:58.340 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.340 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:58.340 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:10:58.340 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.340 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:58.340 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:58.340 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:58.340 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:58.340 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:58.340 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:58.340 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:58.598 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:58.598 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:58.598 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:58.598 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:58.598 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:58.598 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:10:58.598 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:58.598 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:10:58.598 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:58.598 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:58.598 rmmod nvme_tcp 00:10:58.598 rmmod nvme_fabrics 00:10:58.598 rmmod nvme_keyring 00:10:58.598 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:58.598 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:10:58.598 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:10:58.598 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3274718 ']' 00:10:58.598 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3274718 00:10:58.598 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 3274718 ']' 00:10:58.598 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 3274718 00:10:58.598 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:10:58.598 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:58.598 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3274718 00:10:58.856 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:58.856 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:58.856 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3274718' 00:10:58.856 killing process with pid 3274718 00:10:58.856 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@971 -- # kill 3274718 00:10:58.856 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 3274718 00:10:58.856 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:58.856 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:58.856 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:58.856 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:10:58.856 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:10:58.856 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:58.856 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:10:58.856 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:58.856 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:58.856 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.856 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:58.856 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.394 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:01.394 00:11:01.394 real 0m7.278s 00:11:01.394 user 0m11.668s 00:11:01.394 sys 0m2.358s 00:11:01.394 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:01.394 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:01.394 
************************************ 00:11:01.394 END TEST nvmf_referrals 00:11:01.394 ************************************ 00:11:01.394 09:01:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:01.394 09:01:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:01.394 09:01:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:01.394 09:01:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:01.394 ************************************ 00:11:01.394 START TEST nvmf_connect_disconnect 00:11:01.394 ************************************ 00:11:01.394 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:01.394 * Looking for test storage... 
00:11:01.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:01.394 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:01.394 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:11:01.394 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:01.394 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:01.394 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:01.394 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:01.394 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:01.394 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:01.394 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:01.394 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:01.394 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:01.394 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:01.394 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:01.394 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:01.394 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:01.394 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:11:01.394 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:01.394 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:01.394 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:01.394 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:01.394 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:01.394 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:01.394 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:01.394 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:01.394 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:01.394 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:01.394 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:01.394 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:01.394 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:01.394 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:01.394 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:01.394 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:01.394 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:01.394 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:01.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.394 --rc genhtml_branch_coverage=1 00:11:01.394 --rc genhtml_function_coverage=1 00:11:01.394 --rc genhtml_legend=1 00:11:01.394 --rc geninfo_all_blocks=1 00:11:01.394 --rc geninfo_unexecuted_blocks=1 00:11:01.394 00:11:01.394 ' 00:11:01.394 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:01.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.395 --rc genhtml_branch_coverage=1 00:11:01.395 --rc genhtml_function_coverage=1 00:11:01.395 --rc genhtml_legend=1 00:11:01.395 --rc geninfo_all_blocks=1 00:11:01.395 --rc geninfo_unexecuted_blocks=1 00:11:01.395 00:11:01.395 ' 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:01.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.395 --rc genhtml_branch_coverage=1 00:11:01.395 --rc genhtml_function_coverage=1 00:11:01.395 --rc genhtml_legend=1 00:11:01.395 --rc geninfo_all_blocks=1 00:11:01.395 --rc geninfo_unexecuted_blocks=1 00:11:01.395 00:11:01.395 ' 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:01.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.395 --rc genhtml_branch_coverage=1 00:11:01.395 --rc genhtml_function_coverage=1 00:11:01.395 --rc genhtml_legend=1 00:11:01.395 --rc geninfo_all_blocks=1 00:11:01.395 --rc geninfo_unexecuted_blocks=1 00:11:01.395 00:11:01.395 ' 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:01.395 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:01.395 09:01:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:03.931 09:01:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:03.931 09:01:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:03.931 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:03.931 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:03.931 09:01:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:03.931 Found net devices under 0000:09:00.0: cvl_0_0 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:03.931 09:01:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:03.931 Found net devices under 0000:09:00.1: cvl_0_1 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:03.931 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:03.932 09:01:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:03.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:03.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.374 ms 00:11:03.932 00:11:03.932 --- 10.0.0.2 ping statistics --- 00:11:03.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.932 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:03.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:03.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:11:03.932 00:11:03.932 --- 10.0.0.1 ping statistics --- 00:11:03.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.932 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=3277146 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3277146 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 3277146 ']' 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:03.932 [2024-11-20 09:01:40.603176] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:11:03.932 [2024-11-20 09:01:40.603268] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.932 [2024-11-20 09:01:40.679558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:03.932 [2024-11-20 09:01:40.742493] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:03.932 [2024-11-20 09:01:40.742543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:03.932 [2024-11-20 09:01:40.742572] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:03.932 [2024-11-20 09:01:40.742584] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:03.932 [2024-11-20 09:01:40.742595] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:03.932 [2024-11-20 09:01:40.744336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.932 [2024-11-20 09:01:40.744402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.932 [2024-11-20 09:01:40.744452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:03.932 [2024-11-20 09:01:40.744456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:03.932 09:01:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:03.932 [2024-11-20 09:01:40.900481] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:03.932 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.933 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:03.933 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:03.933 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.933 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:04.190 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.190 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:04.190 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.190 09:01:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:04.190 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.190 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:04.190 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.190 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:04.190 [2024-11-20 09:01:40.970048] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:04.190 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.190 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:04.190 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:04.190 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:06.716 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.995 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.046 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.326 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:18.326 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:18.326 09:01:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:18.326 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:18.326 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:18.326 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:18.326 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:18.326 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:18.326 rmmod nvme_tcp 00:11:18.326 rmmod nvme_fabrics 00:11:18.326 rmmod nvme_keyring 00:11:18.326 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:18.326 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:18.326 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:18.326 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3277146 ']' 00:11:18.326 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3277146 00:11:18.326 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 3277146 ']' 00:11:18.326 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 3277146 00:11:18.326 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 00:11:18.326 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:18.326 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3277146 
00:11:18.326 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:18.326 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:18.326 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3277146' 00:11:18.326 killing process with pid 3277146 00:11:18.326 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 3277146 00:11:18.326 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 3277146 00:11:18.326 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:18.326 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:18.326 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:18.326 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:18.326 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:11:18.326 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:18.326 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:11:18.326 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:18.326 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:18.326 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.326 09:01:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:18.326 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.228 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:20.228 00:11:20.228 real 0m19.271s 00:11:20.228 user 0m57.465s 00:11:20.228 sys 0m3.493s 00:11:20.228 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:20.228 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:20.228 ************************************ 00:11:20.228 END TEST nvmf_connect_disconnect 00:11:20.228 ************************************ 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:20.487 ************************************ 00:11:20.487 START TEST nvmf_multitarget 00:11:20.487 ************************************ 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:20.487 * Looking for test storage... 
00:11:20.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:20.487 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.487 --rc genhtml_branch_coverage=1 00:11:20.487 --rc genhtml_function_coverage=1 00:11:20.487 --rc genhtml_legend=1 00:11:20.487 --rc geninfo_all_blocks=1 00:11:20.487 --rc geninfo_unexecuted_blocks=1 00:11:20.487 00:11:20.487 ' 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:20.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.487 --rc genhtml_branch_coverage=1 00:11:20.487 --rc genhtml_function_coverage=1 00:11:20.487 --rc genhtml_legend=1 00:11:20.487 --rc geninfo_all_blocks=1 00:11:20.487 --rc geninfo_unexecuted_blocks=1 00:11:20.487 00:11:20.487 ' 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:20.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.487 --rc genhtml_branch_coverage=1 00:11:20.487 --rc genhtml_function_coverage=1 00:11:20.487 --rc genhtml_legend=1 00:11:20.487 --rc geninfo_all_blocks=1 00:11:20.487 --rc geninfo_unexecuted_blocks=1 00:11:20.487 00:11:20.487 ' 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:20.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.487 --rc genhtml_branch_coverage=1 00:11:20.487 --rc genhtml_function_coverage=1 00:11:20.487 --rc genhtml_legend=1 00:11:20.487 --rc geninfo_all_blocks=1 00:11:20.487 --rc geninfo_unexecuted_blocks=1 00:11:20.487 00:11:20.487 ' 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:20.487 09:01:57 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:20.487 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:20.488 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:20.488 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.488 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.488 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.488 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.488 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.488 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.488 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:20.488 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.488 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:20.488 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:20.488 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:20.488 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:20.488 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:11:20.488 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:20.488 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:20.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:20.488 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:20.488 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:20.488 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:20.488 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:20.488 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:20.488 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:20.488 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:20.488 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:20.488 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:20.488 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:20.488 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.488 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.488 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.488 09:01:57 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:20.488 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:20.488 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:20.488 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:23.021 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:23.021 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:11:23.021 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:23.021 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:23.021 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:23.021 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:23.021 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:23.021 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:11:23.021 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:23.021 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:11:23.021 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:11:23.021 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:11:23.021 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:11:23.021 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:11:23.021 09:01:59 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:11:23.021 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:23.021 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:23.021 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:23.021 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:23.021 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:23.021 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:23.022 09:01:59 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:23.022 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:23.022 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.022 09:01:59 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:23.022 Found net devices under 0000:09:00.0: cvl_0_0 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.022 
09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:23.022 Found net devices under 0000:09:00.1: cvl_0_1 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:23.022 09:01:59 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:23.022 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:23.022 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:23.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:11:23.023 00:11:23.023 --- 10.0.0.2 ping statistics --- 00:11:23.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.023 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:11:23.023 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:23.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:23.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:11:23.023 00:11:23.023 --- 10.0.0.1 ping statistics --- 00:11:23.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.023 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:11:23.023 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:23.023 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:11:23.023 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:23.023 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:23.023 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:23.023 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:23.023 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:23.023 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:23.023 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:23.023 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:23.023 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:23.023 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:23.023 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:23.023 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3280910 00:11:23.023 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:23.023 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3280910 00:11:23.023 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 3280910 ']' 00:11:23.023 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.023 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:23.023 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.023 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:23.023 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:23.023 [2024-11-20 09:01:59.857953] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:11:23.023 [2024-11-20 09:01:59.858032] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:23.023 [2024-11-20 09:01:59.933247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:23.023 [2024-11-20 09:01:59.993612] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:23.023 [2024-11-20 09:01:59.993684] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:23.023 [2024-11-20 09:01:59.993697] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:23.023 [2024-11-20 09:01:59.993722] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:23.023 [2024-11-20 09:01:59.993732] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:23.023 [2024-11-20 09:01:59.995351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:23.023 [2024-11-20 09:01:59.995377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:23.023 [2024-11-20 09:01:59.995437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:23.023 [2024-11-20 09:01:59.995441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.281 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:23.281 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:11:23.281 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:23.281 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:23.281 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:23.282 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:23.282 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:23.282 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:23.282 09:02:00 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:23.282 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:23.282 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:23.539 "nvmf_tgt_1" 00:11:23.539 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:23.539 "nvmf_tgt_2" 00:11:23.539 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:23.539 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:23.797 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:23.797 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:23.797 true 00:11:23.797 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:24.055 true 00:11:24.055 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:24.055 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:24.055 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:24.055 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:24.055 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:24.055 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:24.055 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:24.055 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:24.055 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:24.055 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:24.055 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:24.055 rmmod nvme_tcp 00:11:24.055 rmmod nvme_fabrics 00:11:24.055 rmmod nvme_keyring 00:11:24.311 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:24.311 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:24.311 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:24.311 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3280910 ']' 00:11:24.311 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3280910 00:11:24.311 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 3280910 ']' 00:11:24.311 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 3280910 00:11:24.311 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:11:24.311 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:24.311 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3280910 00:11:24.311 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:24.311 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:24.311 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3280910' 00:11:24.311 killing process with pid 3280910 00:11:24.311 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 3280910 00:11:24.311 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 3280910 00:11:24.569 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:24.569 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:24.569 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:24.569 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:24.569 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:11:24.569 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:24.569 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:11:24.569 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:24.569 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:24.569 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.569 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:24.569 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.474 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:26.474 00:11:26.474 real 0m6.130s 00:11:26.474 user 0m7.209s 00:11:26.474 sys 0m2.109s 00:11:26.474 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:26.474 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:26.474 ************************************ 00:11:26.474 END TEST nvmf_multitarget 00:11:26.474 ************************************ 00:11:26.474 09:02:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:26.474 09:02:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:26.474 09:02:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:26.474 09:02:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:26.474 ************************************ 00:11:26.474 START TEST nvmf_rpc 00:11:26.474 ************************************ 00:11:26.474 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:26.733 * Looking for test storage... 
00:11:26.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:26.733 09:02:03 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:26.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.733 --rc genhtml_branch_coverage=1 00:11:26.733 --rc genhtml_function_coverage=1 00:11:26.733 --rc genhtml_legend=1 00:11:26.733 --rc geninfo_all_blocks=1 00:11:26.733 --rc geninfo_unexecuted_blocks=1 
00:11:26.733 00:11:26.733 ' 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:26.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.733 --rc genhtml_branch_coverage=1 00:11:26.733 --rc genhtml_function_coverage=1 00:11:26.733 --rc genhtml_legend=1 00:11:26.733 --rc geninfo_all_blocks=1 00:11:26.733 --rc geninfo_unexecuted_blocks=1 00:11:26.733 00:11:26.733 ' 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:26.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.733 --rc genhtml_branch_coverage=1 00:11:26.733 --rc genhtml_function_coverage=1 00:11:26.733 --rc genhtml_legend=1 00:11:26.733 --rc geninfo_all_blocks=1 00:11:26.733 --rc geninfo_unexecuted_blocks=1 00:11:26.733 00:11:26.733 ' 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:26.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.733 --rc genhtml_branch_coverage=1 00:11:26.733 --rc genhtml_function_coverage=1 00:11:26.733 --rc genhtml_legend=1 00:11:26.733 --rc geninfo_all_blocks=1 00:11:26.733 --rc geninfo_unexecuted_blocks=1 00:11:26.733 00:11:26.733 ' 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:26.733 09:02:03 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:26.733 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:26.734 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:26.734 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:26.734 09:02:03 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.267 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:29.267 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:11:29.267 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:29.267 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:29.267 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:29.267 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:29.267 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:29.267 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:11:29.267 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:29.267 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:11:29.267 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:11:29.267 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:11:29.267 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:11:29.267 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:11:29.267 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:11:29.267 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:29.267 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:29.267 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:29.267 
09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:29.267 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:29.267 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:29.267 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:29.267 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:29.267 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:29.267 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:29.267 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:29.267 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:29.267 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:29.267 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:29.267 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 
(0x8086 - 0x159b)' 00:11:29.268 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:29.268 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:29.268 Found net devices under 0000:09:00.0: cvl_0_0 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:29.268 Found net devices under 0000:09:00.1: cvl_0_1 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.268 09:02:05 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:29.268 
09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:29.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:29.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:11:29.268 00:11:29.268 --- 10.0.0.2 ping statistics --- 00:11:29.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.268 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:29.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:29.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:11:29.268 00:11:29.268 --- 10.0.0.1 ping statistics --- 00:11:29.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.268 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3283021 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:29.268 
09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3283021 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 3283021 ']' 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:29.268 09:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.268 [2024-11-20 09:02:05.984372] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:11:29.268 [2024-11-20 09:02:05.984442] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:29.268 [2024-11-20 09:02:06.056250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:29.268 [2024-11-20 09:02:06.118207] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:29.268 [2024-11-20 09:02:06.118259] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:29.269 [2024-11-20 09:02:06.118288] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:29.269 [2024-11-20 09:02:06.118299] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:11:29.269 [2024-11-20 09:02:06.118316] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:29.269 [2024-11-20 09:02:06.119938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.269 [2024-11-20 09:02:06.119994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:29.269 [2024-11-20 09:02:06.120017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:29.269 [2024-11-20 09:02:06.120020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.269 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:29.269 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:11:29.269 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:29.269 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:29.269 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.269 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:29.269 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:29.269 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.269 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.269 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.269 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:29.269 "tick_rate": 2700000000, 00:11:29.269 "poll_groups": [ 00:11:29.269 { 00:11:29.269 "name": "nvmf_tgt_poll_group_000", 00:11:29.269 "admin_qpairs": 0, 00:11:29.269 "io_qpairs": 0, 00:11:29.269 
"current_admin_qpairs": 0, 00:11:29.269 "current_io_qpairs": 0, 00:11:29.269 "pending_bdev_io": 0, 00:11:29.269 "completed_nvme_io": 0, 00:11:29.269 "transports": [] 00:11:29.269 }, 00:11:29.269 { 00:11:29.269 "name": "nvmf_tgt_poll_group_001", 00:11:29.269 "admin_qpairs": 0, 00:11:29.269 "io_qpairs": 0, 00:11:29.269 "current_admin_qpairs": 0, 00:11:29.269 "current_io_qpairs": 0, 00:11:29.269 "pending_bdev_io": 0, 00:11:29.269 "completed_nvme_io": 0, 00:11:29.269 "transports": [] 00:11:29.269 }, 00:11:29.269 { 00:11:29.269 "name": "nvmf_tgt_poll_group_002", 00:11:29.269 "admin_qpairs": 0, 00:11:29.269 "io_qpairs": 0, 00:11:29.269 "current_admin_qpairs": 0, 00:11:29.269 "current_io_qpairs": 0, 00:11:29.269 "pending_bdev_io": 0, 00:11:29.269 "completed_nvme_io": 0, 00:11:29.269 "transports": [] 00:11:29.269 }, 00:11:29.269 { 00:11:29.269 "name": "nvmf_tgt_poll_group_003", 00:11:29.269 "admin_qpairs": 0, 00:11:29.269 "io_qpairs": 0, 00:11:29.269 "current_admin_qpairs": 0, 00:11:29.269 "current_io_qpairs": 0, 00:11:29.269 "pending_bdev_io": 0, 00:11:29.269 "completed_nvme_io": 0, 00:11:29.269 "transports": [] 00:11:29.269 } 00:11:29.269 ] 00:11:29.269 }' 00:11:29.269 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:29.269 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:29.269 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:29.269 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:29.527 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:29.527 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:29.527 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:29.527 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:29.527 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.527 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.527 [2024-11-20 09:02:06.345856] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:29.527 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.527 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:29.527 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.527 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.527 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.527 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:29.527 "tick_rate": 2700000000, 00:11:29.527 "poll_groups": [ 00:11:29.527 { 00:11:29.527 "name": "nvmf_tgt_poll_group_000", 00:11:29.527 "admin_qpairs": 0, 00:11:29.527 "io_qpairs": 0, 00:11:29.527 "current_admin_qpairs": 0, 00:11:29.527 "current_io_qpairs": 0, 00:11:29.527 "pending_bdev_io": 0, 00:11:29.527 "completed_nvme_io": 0, 00:11:29.527 "transports": [ 00:11:29.527 { 00:11:29.527 "trtype": "TCP" 00:11:29.527 } 00:11:29.527 ] 00:11:29.527 }, 00:11:29.527 { 00:11:29.527 "name": "nvmf_tgt_poll_group_001", 00:11:29.527 "admin_qpairs": 0, 00:11:29.527 "io_qpairs": 0, 00:11:29.527 "current_admin_qpairs": 0, 00:11:29.527 "current_io_qpairs": 0, 00:11:29.527 "pending_bdev_io": 0, 00:11:29.527 "completed_nvme_io": 0, 00:11:29.527 "transports": [ 00:11:29.527 { 00:11:29.527 "trtype": "TCP" 00:11:29.527 } 00:11:29.527 ] 00:11:29.527 }, 00:11:29.527 { 00:11:29.527 "name": "nvmf_tgt_poll_group_002", 00:11:29.527 "admin_qpairs": 0, 00:11:29.527 "io_qpairs": 0, 00:11:29.527 
"current_admin_qpairs": 0, 00:11:29.527 "current_io_qpairs": 0, 00:11:29.527 "pending_bdev_io": 0, 00:11:29.527 "completed_nvme_io": 0, 00:11:29.527 "transports": [ 00:11:29.527 { 00:11:29.527 "trtype": "TCP" 00:11:29.527 } 00:11:29.527 ] 00:11:29.527 }, 00:11:29.527 { 00:11:29.527 "name": "nvmf_tgt_poll_group_003", 00:11:29.527 "admin_qpairs": 0, 00:11:29.527 "io_qpairs": 0, 00:11:29.527 "current_admin_qpairs": 0, 00:11:29.527 "current_io_qpairs": 0, 00:11:29.527 "pending_bdev_io": 0, 00:11:29.527 "completed_nvme_io": 0, 00:11:29.527 "transports": [ 00:11:29.527 { 00:11:29.527 "trtype": "TCP" 00:11:29.527 } 00:11:29.527 ] 00:11:29.527 } 00:11:29.527 ] 00:11:29.527 }' 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.528 Malloc1 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.528 [2024-11-20 09:02:06.495598] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:29.528 
09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:11:29.528 [2024-11-20 09:02:06.518213] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:11:29.528 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:29.528 could not add new controller: failed to write to nvme-fabrics device 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.528 09:02:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.528 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:30.462 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:30.462 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:30.462 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:30.462 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:30.463 09:02:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:32.363 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.363 09:02:09 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:32.363 [2024-11-20 09:02:09.329457] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:11:32.363 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:32.363 could not add new controller: failed to write to nvme-fabrics device 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:32.363 09:02:09 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.363 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:33.297 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:33.297 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:33.297 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:33.297 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:33.297 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:35.195 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:35.195 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:35.196 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:35.196 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:35.196 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( 
nvme_devices == nvme_device_counter )) 00:11:35.196 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:35.196 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:35.196 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.196 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:35.196 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:35.196 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:35.196 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:35.196 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:35.196 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:35.196 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:35.196 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:35.196 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.196 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.196 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.196 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:35.196 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:35.196 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:11:35.196 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.196 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.196 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.196 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.196 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.196 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.196 [2024-11-20 09:02:12.164414] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.196 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.196 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:35.196 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.196 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.196 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.196 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:35.196 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.196 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.196 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.196 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:36.129 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:36.129 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:36.129 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:36.129 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:36.129 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:38.114 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:38.114 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:38.114 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:38.114 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:38.114 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:38.114 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:38.114 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:38.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.114 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:38.114 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:38.114 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:38.114 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:38.114 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:38.114 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:38.114 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:38.114 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:38.114 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.114 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.114 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.114 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:38.114 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.114 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.114 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.114 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:38.114 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:38.114 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.114 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.114 09:02:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.114 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:38.114 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.114 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.114 [2024-11-20 09:02:15.000878] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:38.114 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.114 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:38.115 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.115 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.115 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.115 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:38.115 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.115 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.115 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.115 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:39.046 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:39.046 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:39.046 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:39.046 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:39.046 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:40.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.941 [2024-11-20 09:02:17.843548] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.941 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:41.502 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:41.502 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:41.502 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 
-- # local nvme_device_counter=1 nvme_devices=0 00:11:41.502 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:41.502 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:44.029 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1233 -- # return 0 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.029 [2024-11-20 09:02:20.630215] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.029 09:02:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:44.595 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:44.595 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:44.595 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:44.595 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:44.595 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # 
sleep 2 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:46.495 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.495 [2024-11-20 09:02:23.467453] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.495 09:02:23 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.495 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:47.429 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:47.429 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:47.429 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:47.429 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:47.429 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l 
-o NAME,SERIAL 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:49.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.329 [2024-11-20 09:02:26.215134] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.329 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.330 [2024-11-20 09:02:26.263225] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.330 
09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.330 [2024-11-20 09:02:26.311429] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:49.330 
09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.330 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.588 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.588 [2024-11-20 09:02:26.359585] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.588 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.588 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:49.588 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.588 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.588 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.588 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:49.588 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.588 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.588 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.588 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.588 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.588 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.588 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.588 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.588 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.588 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.588 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.588 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:49.588 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:49.588 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.588 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.588 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.588 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.589 [2024-11-20 
09:02:26.407775] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.589 
09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:49.589 "tick_rate": 2700000000, 00:11:49.589 "poll_groups": [ 00:11:49.589 { 00:11:49.589 "name": "nvmf_tgt_poll_group_000", 00:11:49.589 "admin_qpairs": 2, 00:11:49.589 "io_qpairs": 84, 00:11:49.589 "current_admin_qpairs": 0, 00:11:49.589 "current_io_qpairs": 0, 00:11:49.589 "pending_bdev_io": 0, 00:11:49.589 "completed_nvme_io": 232, 00:11:49.589 "transports": [ 00:11:49.589 { 00:11:49.589 "trtype": "TCP" 00:11:49.589 } 00:11:49.589 ] 00:11:49.589 }, 00:11:49.589 { 00:11:49.589 "name": "nvmf_tgt_poll_group_001", 00:11:49.589 "admin_qpairs": 2, 00:11:49.589 "io_qpairs": 84, 00:11:49.589 "current_admin_qpairs": 0, 00:11:49.589 "current_io_qpairs": 0, 00:11:49.589 "pending_bdev_io": 0, 00:11:49.589 "completed_nvme_io": 139, 00:11:49.589 "transports": [ 00:11:49.589 { 00:11:49.589 "trtype": "TCP" 00:11:49.589 } 00:11:49.589 ] 00:11:49.589 }, 00:11:49.589 { 00:11:49.589 "name": "nvmf_tgt_poll_group_002", 00:11:49.589 "admin_qpairs": 1, 00:11:49.589 "io_qpairs": 84, 00:11:49.589 "current_admin_qpairs": 0, 00:11:49.589 "current_io_qpairs": 0, 00:11:49.589 "pending_bdev_io": 0, 00:11:49.589 "completed_nvme_io": 179, 00:11:49.589 "transports": [ 00:11:49.589 { 00:11:49.589 "trtype": "TCP" 00:11:49.589 } 00:11:49.589 ] 00:11:49.589 }, 00:11:49.589 { 00:11:49.589 "name": "nvmf_tgt_poll_group_003", 00:11:49.589 "admin_qpairs": 2, 00:11:49.589 "io_qpairs": 84, 
00:11:49.589 "current_admin_qpairs": 0, 00:11:49.589 "current_io_qpairs": 0, 00:11:49.589 "pending_bdev_io": 0, 00:11:49.589 "completed_nvme_io": 136, 00:11:49.589 "transports": [ 00:11:49.589 { 00:11:49.589 "trtype": "TCP" 00:11:49.589 } 00:11:49.589 ] 00:11:49.589 } 00:11:49.589 ] 00:11:49.589 }' 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:49.589 rmmod nvme_tcp 00:11:49.589 rmmod nvme_fabrics 00:11:49.589 rmmod nvme_keyring 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3283021 ']' 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3283021 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 3283021 ']' 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 3283021 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:49.589 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3283021 00:11:49.846 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:49.846 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:49.846 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3283021' 00:11:49.846 killing process with pid 3283021 00:11:49.846 09:02:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 3283021 00:11:49.846 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 3283021 00:11:50.105 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:50.105 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:50.105 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:50.105 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:11:50.105 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:11:50.105 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:50.105 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:11:50.105 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:50.105 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:50.105 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.105 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:50.105 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.016 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:52.016 00:11:52.016 real 0m25.463s 00:11:52.016 user 1m22.494s 00:11:52.016 sys 0m4.133s 00:11:52.016 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:52.016 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.016 ************************************ 00:11:52.016 END TEST 
nvmf_rpc 00:11:52.016 ************************************ 00:11:52.016 09:02:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:52.016 09:02:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:52.016 09:02:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:52.016 09:02:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:52.016 ************************************ 00:11:52.016 START TEST nvmf_invalid 00:11:52.016 ************************************ 00:11:52.016 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:52.016 * Looking for test storage... 00:11:52.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:52.016 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:52.016 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:11:52.016 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@336 -- # read -ra ver1 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:52.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.276 --rc genhtml_branch_coverage=1 00:11:52.276 --rc genhtml_function_coverage=1 00:11:52.276 --rc genhtml_legend=1 00:11:52.276 --rc geninfo_all_blocks=1 00:11:52.276 --rc geninfo_unexecuted_blocks=1 00:11:52.276 00:11:52.276 ' 
00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:52.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.276 --rc genhtml_branch_coverage=1 00:11:52.276 --rc genhtml_function_coverage=1 00:11:52.276 --rc genhtml_legend=1 00:11:52.276 --rc geninfo_all_blocks=1 00:11:52.276 --rc geninfo_unexecuted_blocks=1 00:11:52.276 00:11:52.276 ' 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:52.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.276 --rc genhtml_branch_coverage=1 00:11:52.276 --rc genhtml_function_coverage=1 00:11:52.276 --rc genhtml_legend=1 00:11:52.276 --rc geninfo_all_blocks=1 00:11:52.276 --rc geninfo_unexecuted_blocks=1 00:11:52.276 00:11:52.276 ' 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:52.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.276 --rc genhtml_branch_coverage=1 00:11:52.276 --rc genhtml_function_coverage=1 00:11:52.276 --rc genhtml_legend=1 00:11:52.276 --rc geninfo_all_blocks=1 00:11:52.276 --rc geninfo_unexecuted_blocks=1 00:11:52.276 00:11:52.276 ' 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:52.276 09:02:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.276 
09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:52.276 09:02:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:52.276 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:52.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:52.277 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:52.277 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:52.277 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:52.277 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:52.277 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:52.277 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:52.277 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:52.277 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:52.277 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:52.277 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:52.277 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:52.277 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:52.277 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:52.277 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:52.277 09:02:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.277 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.277 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.277 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:52.277 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:52.277 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:11:52.277 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:54.812 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:54.812 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:11:54.812 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:54.812 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:54.812 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:54.812 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:54.812 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:54.812 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:11:54.812 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:54.812 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:11:54.812 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:11:54.812 09:02:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:11:54.812 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:11:54.812 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:11:54.812 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:11:54.812 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:54.812 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:54.812 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:54.812 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:54.812 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:54.812 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:54.812 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:54.812 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:54.812 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:54.812 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:54.812 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:54.812 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:54.812 09:02:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:54.812 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:54.812 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:54.812 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:54.812 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:54.812 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:54.812 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:54.812 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:54.812 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:54.812 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:54.813 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:54.813 Found net devices under 0000:09:00.0: cvl_0_0 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:54.813 Found net devices under 0000:09:00.1: cvl_0_1 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:54.813 09:02:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:54.813 09:02:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:54.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:54.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:11:54.813 00:11:54.813 --- 10.0.0.2 ping statistics --- 00:11:54.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.813 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:54.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:54.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:11:54.813 00:11:54.813 --- 10.0.0.1 ping statistics --- 00:11:54.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.813 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:54.813 09:02:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:54.813 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3287525 00:11:54.814 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:54.814 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3287525 00:11:54.814 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 3287525 ']' 00:11:54.814 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.814 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:54.814 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:54.814 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:54.814 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:54.814 [2024-11-20 09:02:31.547536] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:11:54.814 [2024-11-20 09:02:31.547633] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:54.814 [2024-11-20 09:02:31.621716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:54.814 [2024-11-20 09:02:31.682667] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:54.814 [2024-11-20 09:02:31.682719] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:54.814 [2024-11-20 09:02:31.682748] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:54.814 [2024-11-20 09:02:31.682759] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:54.814 [2024-11-20 09:02:31.682770] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:54.814 [2024-11-20 09:02:31.684339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:54.814 [2024-11-20 09:02:31.684405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:54.814 [2024-11-20 09:02:31.684455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:54.814 [2024-11-20 09:02:31.684459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.814 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:54.814 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:11:54.814 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:54.814 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:54.814 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:54.814 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:54.814 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:54.814 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode11971 00:11:55.072 [2024-11-20 09:02:32.087703] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:55.329 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:11:55.329 { 00:11:55.329 "nqn": "nqn.2016-06.io.spdk:cnode11971", 00:11:55.329 "tgt_name": "foobar", 00:11:55.329 "method": "nvmf_create_subsystem", 00:11:55.329 "req_id": 1 00:11:55.329 } 00:11:55.329 Got JSON-RPC error 
response 00:11:55.329 response: 00:11:55.329 { 00:11:55.329 "code": -32603, 00:11:55.329 "message": "Unable to find target foobar" 00:11:55.329 }' 00:11:55.329 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:11:55.329 { 00:11:55.329 "nqn": "nqn.2016-06.io.spdk:cnode11971", 00:11:55.329 "tgt_name": "foobar", 00:11:55.329 "method": "nvmf_create_subsystem", 00:11:55.329 "req_id": 1 00:11:55.329 } 00:11:55.329 Got JSON-RPC error response 00:11:55.329 response: 00:11:55.329 { 00:11:55.329 "code": -32603, 00:11:55.329 "message": "Unable to find target foobar" 00:11:55.329 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:55.329 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:55.329 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode1358 00:11:55.329 [2024-11-20 09:02:32.348623] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1358: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:55.587 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:11:55.587 { 00:11:55.587 "nqn": "nqn.2016-06.io.spdk:cnode1358", 00:11:55.587 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:55.587 "method": "nvmf_create_subsystem", 00:11:55.587 "req_id": 1 00:11:55.587 } 00:11:55.587 Got JSON-RPC error response 00:11:55.587 response: 00:11:55.587 { 00:11:55.587 "code": -32602, 00:11:55.587 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:55.587 }' 00:11:55.587 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:11:55.587 { 00:11:55.587 "nqn": "nqn.2016-06.io.spdk:cnode1358", 00:11:55.587 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:55.587 "method": "nvmf_create_subsystem", 00:11:55.587 
"req_id": 1 00:11:55.587 } 00:11:55.587 Got JSON-RPC error response 00:11:55.587 response: 00:11:55.587 { 00:11:55.587 "code": -32602, 00:11:55.587 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:55.587 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:55.587 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:55.587 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode15809 00:11:55.846 [2024-11-20 09:02:32.617477] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15809: invalid model number 'SPDK_Controller' 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:11:55.846 { 00:11:55.846 "nqn": "nqn.2016-06.io.spdk:cnode15809", 00:11:55.846 "model_number": "SPDK_Controller\u001f", 00:11:55.846 "method": "nvmf_create_subsystem", 00:11:55.846 "req_id": 1 00:11:55.846 } 00:11:55.846 Got JSON-RPC error response 00:11:55.846 response: 00:11:55.846 { 00:11:55.846 "code": -32602, 00:11:55.846 "message": "Invalid MN SPDK_Controller\u001f" 00:11:55.846 }' 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:11:55.846 { 00:11:55.846 "nqn": "nqn.2016-06.io.spdk:cnode15809", 00:11:55.846 "model_number": "SPDK_Controller\u001f", 00:11:55.846 "method": "nvmf_create_subsystem", 00:11:55.846 "req_id": 1 00:11:55.846 } 00:11:55.846 Got JSON-RPC error response 00:11:55.846 response: 00:11:55.846 { 00:11:55.846 "code": -32602, 00:11:55.846 "message": "Invalid MN SPDK_Controller\u001f" 00:11:55.846 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.846 09:02:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:11:55.846 09:02:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:55.846 09:02:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:11:55.846 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 
00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:11:55.847 
09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ S == \- ]] 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'S$}HDN_2(8W! ]m07'\''sn6' 00:11:55.847 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'S$}HDN_2(8W! ]m07'\''sn6' nqn.2016-06.io.spdk:cnode6539 00:11:56.106 [2024-11-20 09:02:32.986755] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6539: invalid serial number 'S$}HDN_2(8W! ]m07'sn6' 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:11:56.106 { 00:11:56.106 "nqn": "nqn.2016-06.io.spdk:cnode6539", 00:11:56.106 "serial_number": "S$}HDN_2(8W! 
]m07'\''sn6", 00:11:56.106 "method": "nvmf_create_subsystem", 00:11:56.106 "req_id": 1 00:11:56.106 } 00:11:56.106 Got JSON-RPC error response 00:11:56.106 response: 00:11:56.106 { 00:11:56.106 "code": -32602, 00:11:56.106 "message": "Invalid SN S$}HDN_2(8W! ]m07'\''sn6" 00:11:56.106 }' 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:11:56.106 { 00:11:56.106 "nqn": "nqn.2016-06.io.spdk:cnode6539", 00:11:56.106 "serial_number": "S$}HDN_2(8W! ]m07'sn6", 00:11:56.106 "method": "nvmf_create_subsystem", 00:11:56.106 "req_id": 1 00:11:56.106 } 00:11:56.106 Got JSON-RPC error response 00:11:56.106 response: 00:11:56.106 { 00:11:56.106 "code": -32602, 00:11:56.106 "message": "Invalid SN S$}HDN_2(8W! ]m07'sn6" 00:11:56.106 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:56.106 09:02:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:11:56.106 09:02:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:11:56.106 09:02:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:11:56.106 09:02:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.106 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.107 09:02:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.107 09:02:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:11:56.107 09:02:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:11:56.107 09:02:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:11:56.107 09:02:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.107 09:02:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:11:56.107 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:56.108 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:11:56.108 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.108 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.108 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:11:56.366 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:11:56.366 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:11:56.366 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.366 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.366 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:11:56.366 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:56.366 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:11:56.366 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.366 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.366 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:11:56.366 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:11:56.366 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:11:56.366 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.366 09:02:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.366 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:11:56.366 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:11:56.366 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:11:56.366 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.366 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.366 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:11:56.366 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:11:56.366 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:11:56.366 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.366 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.366 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:11:56.366 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:11:56.366 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:11:56.366 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:56.366 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:56.366 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ V == \- ]] 00:11:56.366 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'VZkQm//[3sxNLE*aM^z~916h54G"qH'\''4U"hAhxQKm' 00:11:56.366 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'VZkQm//[3sxNLE*aM^z~916h54G"qH'\''4U"hAhxQKm' nqn.2016-06.io.spdk:cnode13626 00:11:56.624 [2024-11-20 09:02:33.404124] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13626: invalid model number 'VZkQm//[3sxNLE*aM^z~916h54G"qH'4U"hAhxQKm' 00:11:56.624 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:11:56.624 { 00:11:56.624 "nqn": "nqn.2016-06.io.spdk:cnode13626", 00:11:56.624 "model_number": "VZkQm//[3sxNLE*aM^z~916h54G\"qH'\''4U\"hAhxQKm", 00:11:56.624 "method": "nvmf_create_subsystem", 00:11:56.624 "req_id": 1 00:11:56.624 } 00:11:56.624 Got JSON-RPC error response 00:11:56.624 response: 00:11:56.624 { 00:11:56.624 "code": -32602, 00:11:56.624 "message": "Invalid MN VZkQm//[3sxNLE*aM^z~916h54G\"qH'\''4U\"hAhxQKm" 00:11:56.624 }' 00:11:56.624 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:11:56.624 { 00:11:56.624 "nqn": "nqn.2016-06.io.spdk:cnode13626", 00:11:56.624 "model_number": "VZkQm//[3sxNLE*aM^z~916h54G\"qH'4U\"hAhxQKm", 00:11:56.624 "method": "nvmf_create_subsystem", 00:11:56.624 "req_id": 1 00:11:56.624 } 00:11:56.624 Got JSON-RPC error response 00:11:56.624 response: 00:11:56.624 { 00:11:56.624 "code": -32602, 00:11:56.624 "message": "Invalid MN VZkQm//[3sxNLE*aM^z~916h54G\"qH'4U\"hAhxQKm" 00:11:56.624 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:56.624 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:11:56.882 [2024-11-20 09:02:33.661049] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:56.882 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 
-a 00:11:57.140 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:11:57.140 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:11:57.140 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:11:57.140 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:11:57.140 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:11:57.398 [2024-11-20 09:02:34.214875] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:57.398 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:11:57.398 { 00:11:57.398 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:57.398 "listen_address": { 00:11:57.398 "trtype": "tcp", 00:11:57.398 "traddr": "", 00:11:57.398 "trsvcid": "4421" 00:11:57.398 }, 00:11:57.398 "method": "nvmf_subsystem_remove_listener", 00:11:57.398 "req_id": 1 00:11:57.398 } 00:11:57.398 Got JSON-RPC error response 00:11:57.398 response: 00:11:57.398 { 00:11:57.398 "code": -32602, 00:11:57.398 "message": "Invalid parameters" 00:11:57.398 }' 00:11:57.398 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:11:57.398 { 00:11:57.398 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:57.398 "listen_address": { 00:11:57.398 "trtype": "tcp", 00:11:57.398 "traddr": "", 00:11:57.398 "trsvcid": "4421" 00:11:57.398 }, 00:11:57.398 "method": "nvmf_subsystem_remove_listener", 00:11:57.398 "req_id": 1 00:11:57.398 } 00:11:57.398 Got JSON-RPC error response 00:11:57.398 response: 00:11:57.398 { 00:11:57.398 "code": -32602, 00:11:57.398 "message": "Invalid parameters" 00:11:57.398 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:57.398 09:02:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12701 -i 0 00:11:57.656 [2024-11-20 09:02:34.491787] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12701: invalid cntlid range [0-65519] 00:11:57.656 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:11:57.656 { 00:11:57.656 "nqn": "nqn.2016-06.io.spdk:cnode12701", 00:11:57.656 "min_cntlid": 0, 00:11:57.656 "method": "nvmf_create_subsystem", 00:11:57.656 "req_id": 1 00:11:57.656 } 00:11:57.656 Got JSON-RPC error response 00:11:57.656 response: 00:11:57.656 { 00:11:57.656 "code": -32602, 00:11:57.656 "message": "Invalid cntlid range [0-65519]" 00:11:57.656 }' 00:11:57.656 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:11:57.656 { 00:11:57.656 "nqn": "nqn.2016-06.io.spdk:cnode12701", 00:11:57.656 "min_cntlid": 0, 00:11:57.656 "method": "nvmf_create_subsystem", 00:11:57.656 "req_id": 1 00:11:57.656 } 00:11:57.656 Got JSON-RPC error response 00:11:57.656 response: 00:11:57.656 { 00:11:57.656 "code": -32602, 00:11:57.656 "message": "Invalid cntlid range [0-65519]" 00:11:57.656 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:57.656 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8254 -i 65520 00:11:57.913 [2024-11-20 09:02:34.756709] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8254: invalid cntlid range [65520-65519] 00:11:57.913 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:11:57.913 { 00:11:57.913 "nqn": "nqn.2016-06.io.spdk:cnode8254", 00:11:57.913 "min_cntlid": 65520, 00:11:57.913 "method": "nvmf_create_subsystem", 
00:11:57.913 "req_id": 1 00:11:57.913 } 00:11:57.913 Got JSON-RPC error response 00:11:57.913 response: 00:11:57.913 { 00:11:57.913 "code": -32602, 00:11:57.913 "message": "Invalid cntlid range [65520-65519]" 00:11:57.913 }' 00:11:57.913 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:11:57.913 { 00:11:57.913 "nqn": "nqn.2016-06.io.spdk:cnode8254", 00:11:57.913 "min_cntlid": 65520, 00:11:57.913 "method": "nvmf_create_subsystem", 00:11:57.913 "req_id": 1 00:11:57.913 } 00:11:57.913 Got JSON-RPC error response 00:11:57.913 response: 00:11:57.913 { 00:11:57.913 "code": -32602, 00:11:57.913 "message": "Invalid cntlid range [65520-65519]" 00:11:57.913 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:57.913 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6174 -I 0 00:11:58.171 [2024-11-20 09:02:35.017516] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6174: invalid cntlid range [1-0] 00:11:58.171 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:11:58.171 { 00:11:58.171 "nqn": "nqn.2016-06.io.spdk:cnode6174", 00:11:58.171 "max_cntlid": 0, 00:11:58.171 "method": "nvmf_create_subsystem", 00:11:58.171 "req_id": 1 00:11:58.171 } 00:11:58.171 Got JSON-RPC error response 00:11:58.171 response: 00:11:58.171 { 00:11:58.171 "code": -32602, 00:11:58.171 "message": "Invalid cntlid range [1-0]" 00:11:58.171 }' 00:11:58.171 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:11:58.171 { 00:11:58.171 "nqn": "nqn.2016-06.io.spdk:cnode6174", 00:11:58.171 "max_cntlid": 0, 00:11:58.171 "method": "nvmf_create_subsystem", 00:11:58.171 "req_id": 1 00:11:58.171 } 00:11:58.171 Got JSON-RPC error response 00:11:58.171 response: 00:11:58.171 { 00:11:58.171 "code": -32602, 
00:11:58.171 "message": "Invalid cntlid range [1-0]" 00:11:58.171 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:58.171 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2280 -I 65520 00:11:58.429 [2024-11-20 09:02:35.290402] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2280: invalid cntlid range [1-65520] 00:11:58.429 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:11:58.429 { 00:11:58.429 "nqn": "nqn.2016-06.io.spdk:cnode2280", 00:11:58.429 "max_cntlid": 65520, 00:11:58.429 "method": "nvmf_create_subsystem", 00:11:58.429 "req_id": 1 00:11:58.429 } 00:11:58.429 Got JSON-RPC error response 00:11:58.429 response: 00:11:58.429 { 00:11:58.429 "code": -32602, 00:11:58.429 "message": "Invalid cntlid range [1-65520]" 00:11:58.429 }' 00:11:58.429 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:11:58.429 { 00:11:58.429 "nqn": "nqn.2016-06.io.spdk:cnode2280", 00:11:58.429 "max_cntlid": 65520, 00:11:58.429 "method": "nvmf_create_subsystem", 00:11:58.429 "req_id": 1 00:11:58.429 } 00:11:58.429 Got JSON-RPC error response 00:11:58.429 response: 00:11:58.429 { 00:11:58.429 "code": -32602, 00:11:58.429 "message": "Invalid cntlid range [1-65520]" 00:11:58.429 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:58.429 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3678 -i 6 -I 5 00:11:58.687 [2024-11-20 09:02:35.571351] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3678: invalid cntlid range [6-5] 00:11:58.687 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:11:58.687 { 
00:11:58.687 "nqn": "nqn.2016-06.io.spdk:cnode3678", 00:11:58.687 "min_cntlid": 6, 00:11:58.687 "max_cntlid": 5, 00:11:58.687 "method": "nvmf_create_subsystem", 00:11:58.687 "req_id": 1 00:11:58.687 } 00:11:58.687 Got JSON-RPC error response 00:11:58.687 response: 00:11:58.687 { 00:11:58.687 "code": -32602, 00:11:58.687 "message": "Invalid cntlid range [6-5]" 00:11:58.687 }' 00:11:58.687 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:11:58.687 { 00:11:58.687 "nqn": "nqn.2016-06.io.spdk:cnode3678", 00:11:58.687 "min_cntlid": 6, 00:11:58.687 "max_cntlid": 5, 00:11:58.687 "method": "nvmf_create_subsystem", 00:11:58.687 "req_id": 1 00:11:58.687 } 00:11:58.687 Got JSON-RPC error response 00:11:58.687 response: 00:11:58.687 { 00:11:58.687 "code": -32602, 00:11:58.687 "message": "Invalid cntlid range [6-5]" 00:11:58.687 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:58.687 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:58.687 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:11:58.687 { 00:11:58.687 "name": "foobar", 00:11:58.687 "method": "nvmf_delete_target", 00:11:58.687 "req_id": 1 00:11:58.687 } 00:11:58.687 Got JSON-RPC error response 00:11:58.687 response: 00:11:58.687 { 00:11:58.687 "code": -32602, 00:11:58.687 "message": "The specified target doesn'\''t exist, cannot delete it." 00:11:58.687 }' 00:11:58.687 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:11:58.687 { 00:11:58.687 "name": "foobar", 00:11:58.687 "method": "nvmf_delete_target", 00:11:58.687 "req_id": 1 00:11:58.687 } 00:11:58.687 Got JSON-RPC error response 00:11:58.687 response: 00:11:58.687 { 00:11:58.687 "code": -32602, 00:11:58.687 "message": "The specified target doesn't exist, cannot delete it." 
00:11:58.687 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:58.687 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:58.687 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:11:58.687 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:58.687 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:11:58.945 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:58.945 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:11:58.945 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:58.945 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:58.945 rmmod nvme_tcp 00:11:58.945 rmmod nvme_fabrics 00:11:58.945 rmmod nvme_keyring 00:11:58.945 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:58.945 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:11:58.945 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:11:58.946 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3287525 ']' 00:11:58.946 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 3287525 00:11:58.946 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' -z 3287525 ']' 00:11:58.946 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # kill -0 3287525 00:11:58.946 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # uname 00:11:58.946 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
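[Editor's note] The five failing `nvmf_create_subsystem` calls above (min 0, min 65520, max 0, max 65520, and min 6 / max 5) all probe one server-side rule: cntlid values must lie in [1, 65519] and min_cntlid must not exceed max_cntlid. A hedged sketch of that check, matching the `Invalid cntlid range [a-b]` messages in the log (function name and defaults are assumptions; the real check lives in SPDK's nvmf_rpc.c):

```python
def check_cntlid_range(min_cntlid=1, max_cntlid=65519):
    """Return the error string SPDK logs for an invalid cntlid range,
    or None when the range is acceptable (1 <= min <= max <= 65519)."""
    in_bounds = 1 <= min_cntlid <= 65519 and 1 <= max_cntlid <= 65519
    if not in_bounds or min_cntlid > max_cntlid:
        return f"Invalid cntlid range [{min_cntlid}-{max_cntlid}]"
    return None

# Each rejected case from the trace reproduces the logged message:
print(check_cntlid_range(min_cntlid=0))        # Invalid cntlid range [0-65519]
print(check_cntlid_range(min_cntlid=65520))    # Invalid cntlid range [65520-65519]
print(check_cntlid_range(max_cntlid=0))        # Invalid cntlid range [1-0]
print(check_cntlid_range(6, 5))                # Invalid cntlid range [6-5]
```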
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:58.946 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3287525 00:11:58.946 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:58.946 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:58.946 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3287525' 00:11:58.946 killing process with pid 3287525 00:11:58.946 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@971 -- # kill 3287525 00:11:58.946 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@976 -- # wait 3287525 00:11:59.204 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:59.204 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:59.204 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:59.204 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:11:59.204 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:11:59.204 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:59.204 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:11:59.204 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:59.204 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:59.204 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.204 09:02:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.204 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.108 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:01.108 00:12:01.108 real 0m9.100s 00:12:01.108 user 0m21.408s 00:12:01.108 sys 0m2.608s 00:12:01.108 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:01.108 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:01.108 ************************************ 00:12:01.108 END TEST nvmf_invalid 00:12:01.108 ************************************ 00:12:01.108 09:02:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:01.108 09:02:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:01.108 09:02:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:01.108 09:02:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:01.108 ************************************ 00:12:01.108 START TEST nvmf_connect_stress 00:12:01.108 ************************************ 00:12:01.108 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:01.367 * Looking for test storage... 
00:12:01.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.367 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:01.367 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:12:01.367 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:01.367 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:01.367 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:01.367 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:01.367 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:01.367 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:01.367 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:01.367 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:01.367 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:01.367 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:01.367 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:01.367 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:01.367 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:01.367 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:01.367 09:02:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:01.367 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:01.367 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:01.367 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:01.367 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:01.367 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:01.367 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:01.367 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:01.367 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:01.367 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:01.367 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:01.367 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:01.367 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:01.367 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:01.367 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:01.368 09:02:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:01.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.368 --rc genhtml_branch_coverage=1 00:12:01.368 --rc genhtml_function_coverage=1 00:12:01.368 --rc genhtml_legend=1 00:12:01.368 --rc geninfo_all_blocks=1 00:12:01.368 --rc geninfo_unexecuted_blocks=1 00:12:01.368 00:12:01.368 ' 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:01.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.368 --rc genhtml_branch_coverage=1 00:12:01.368 --rc genhtml_function_coverage=1 00:12:01.368 --rc genhtml_legend=1 00:12:01.368 --rc geninfo_all_blocks=1 00:12:01.368 --rc geninfo_unexecuted_blocks=1 00:12:01.368 00:12:01.368 ' 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:01.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.368 --rc genhtml_branch_coverage=1 00:12:01.368 --rc genhtml_function_coverage=1 00:12:01.368 --rc genhtml_legend=1 00:12:01.368 --rc geninfo_all_blocks=1 00:12:01.368 --rc geninfo_unexecuted_blocks=1 00:12:01.368 00:12:01.368 ' 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:01.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.368 --rc genhtml_branch_coverage=1 00:12:01.368 --rc genhtml_function_coverage=1 00:12:01.368 --rc genhtml_legend=1 00:12:01.368 --rc geninfo_all_blocks=1 00:12:01.368 --rc geninfo_unexecuted_blocks=1 00:12:01.368 00:12:01.368 ' 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:01.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:01.368 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.910 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:03.910 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:03.910 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:03.910 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:03.910 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:03.910 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:03.911 09:02:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:03.911 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:03.911 09:02:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:03.911 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.911 09:02:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:03.911 Found net devices under 0000:09:00.0: cvl_0_0 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:03.911 Found net devices under 0000:09:00.1: cvl_0_1 
00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:03.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:03.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:12:03.911 00:12:03.911 --- 10.0.0.2 ping statistics --- 00:12:03.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.911 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:03.911 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:03.911 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:12:03.911 00:12:03.911 --- 10.0.0.1 ping statistics --- 00:12:03.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.911 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:03.911 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:03.912 09:02:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3290170 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3290170 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 3290170 ']' 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.912 [2024-11-20 09:02:40.609942] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:12:03.912 [2024-11-20 09:02:40.610038] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.912 [2024-11-20 09:02:40.680744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:03.912 [2024-11-20 09:02:40.736367] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:03.912 [2024-11-20 09:02:40.736439] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:03.912 [2024-11-20 09:02:40.736467] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:03.912 [2024-11-20 09:02:40.736479] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:03.912 [2024-11-20 09:02:40.736489] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:03.912 [2024-11-20 09:02:40.738130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:03.912 [2024-11-20 09:02:40.738195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.912 [2024-11-20 09:02:40.738190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.912 [2024-11-20 09:02:40.886461] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 
-- # xtrace_disable 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.912 [2024-11-20 09:02:40.903891] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.912 NULL1 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3290312 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:03.912 09:02:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:03.912 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.253 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.253 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:12:04.253 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.253 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.253 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.253 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.253 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.253 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.253 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.253 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.253 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.253 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.253 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.253 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.253 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.253 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.253 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.253 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.253 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.253 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.253 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.253 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.253 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.253 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.253 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.253 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.253 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.253 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.253 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3290312 00:12:04.253 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:04.253 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.253 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.554 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.554 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3290312 00:12:04.554 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:04.554 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.554 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.812 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.812 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3290312 00:12:04.812 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:04.812 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.812 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.069 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.069 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3290312 00:12:05.069 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.069 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.069 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.327 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.327 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3290312 00:12:05.327 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.327 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.327 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.584 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.584 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3290312 00:12:05.584 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.584 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.584 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.150 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.150 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3290312 00:12:06.150 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.150 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.150 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.407 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.407 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3290312 00:12:06.407 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.407 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.407 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.665 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.665 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3290312 00:12:06.665 09:02:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.665 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.665 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.923 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.923 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3290312 00:12:06.923 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.923 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.923 09:02:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.181 09:02:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.181 09:02:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3290312 00:12:07.181 09:02:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.181 09:02:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.181 09:02:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.746 09:02:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.746 09:02:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3290312 00:12:07.746 09:02:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.746 09:02:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.746 
09:02:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.004 09:02:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.004 09:02:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3290312 00:12:08.004 09:02:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.004 09:02:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.004 09:02:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.262 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.262 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3290312 00:12:08.262 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.262 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.262 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.520 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.520 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3290312 00:12:08.520 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.520 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.520 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.778 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.778 
09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3290312 00:12:08.778 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.778 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.778 09:02:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.343 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.343 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3290312 00:12:09.343 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.343 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.343 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.601 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.601 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3290312 00:12:09.601 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.601 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.601 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.859 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.859 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3290312 00:12:09.859 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:12:09.859 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.859 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.116 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.116 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3290312 00:12:10.116 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.116 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.116 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.373 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.373 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3290312 00:12:10.373 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.374 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.374 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.938 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.938 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3290312 00:12:10.938 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.938 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.938 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:12:11.196 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.196 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3290312 00:12:11.196 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.196 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.196 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.453 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.453 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3290312 00:12:11.453 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.453 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.453 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.711 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.711 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3290312 00:12:11.711 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.711 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.711 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.275 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.275 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 3290312 00:12:12.275 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.275 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.275 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.533 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.533 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3290312 00:12:12.533 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.533 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.533 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.791 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.791 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3290312 00:12:12.791 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.791 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.791 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.050 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.050 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3290312 00:12:13.050 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.050 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:13.050 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.307 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.307 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3290312 00:12:13.307 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.308 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.308 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.873 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.873 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3290312 00:12:13.873 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.873 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.873 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.131 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.131 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3290312 00:12:14.131 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.131 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.131 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.131 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:12:14.388 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.388 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3290312 00:12:14.388 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3290312) - No such process 00:12:14.388 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3290312 00:12:14.388 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:14.388 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:14.388 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:14.388 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:14.388 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:12:14.388 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:14.388 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:12:14.388 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:14.388 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:14.388 rmmod nvme_tcp 00:12:14.388 rmmod nvme_fabrics 00:12:14.388 rmmod nvme_keyring 00:12:14.388 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:14.388 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:12:14.389 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@129 -- # return 0 00:12:14.389 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3290170 ']' 00:12:14.389 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3290170 00:12:14.389 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 3290170 ']' 00:12:14.389 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 3290170 00:12:14.389 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:12:14.389 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:14.389 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3290170 00:12:14.389 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:12:14.389 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:12:14.389 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3290170' 00:12:14.389 killing process with pid 3290170 00:12:14.389 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 3290170 00:12:14.389 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 3290170 00:12:14.648 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:14.648 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:14.648 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:14.648 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@297 -- # iptr 00:12:14.648 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:12:14.648 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:14.648 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:12:14.648 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:14.648 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:14.648 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.648 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:14.648 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.183 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:17.183 00:12:17.183 real 0m15.477s 00:12:17.183 user 0m38.748s 00:12:17.183 sys 0m5.793s 00:12:17.183 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:17.183 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.183 ************************************ 00:12:17.183 END TEST nvmf_connect_stress 00:12:17.183 ************************************ 00:12:17.183 09:02:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:17.183 09:02:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:17.183 09:02:53 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:12:17.183 09:02:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:17.183 ************************************ 00:12:17.183 START TEST nvmf_fused_ordering 00:12:17.183 ************************************ 00:12:17.183 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:17.183 * Looking for test storage... 00:12:17.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:17.183 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:17.183 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:12:17.183 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:17.183 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:17.183 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:17.183 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:17.183 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:17.183 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:12:17.183 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:12:17.183 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:12:17.183 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:12:17.183 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- scripts/common.sh@338 -- # local 'op=<' 00:12:17.183 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:12:17.183 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:12:17.183 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:17.183 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:12:17.183 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:12:17.183 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:17.183 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:17.183 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:12:17.183 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:12:17.183 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:17.183 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:12:17.183 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:12:17.183 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:12:17.183 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:12:17.183 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:12:17.184 09:02:53 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:17.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.184 --rc genhtml_branch_coverage=1 00:12:17.184 --rc genhtml_function_coverage=1 00:12:17.184 --rc genhtml_legend=1 00:12:17.184 --rc geninfo_all_blocks=1 00:12:17.184 --rc geninfo_unexecuted_blocks=1 00:12:17.184 00:12:17.184 ' 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:17.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.184 --rc genhtml_branch_coverage=1 00:12:17.184 --rc genhtml_function_coverage=1 00:12:17.184 --rc genhtml_legend=1 00:12:17.184 --rc geninfo_all_blocks=1 00:12:17.184 --rc geninfo_unexecuted_blocks=1 00:12:17.184 00:12:17.184 ' 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:17.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.184 --rc genhtml_branch_coverage=1 00:12:17.184 --rc genhtml_function_coverage=1 00:12:17.184 --rc genhtml_legend=1 00:12:17.184 --rc geninfo_all_blocks=1 00:12:17.184 --rc geninfo_unexecuted_blocks=1 00:12:17.184 00:12:17.184 ' 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:17.184 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:17.184 --rc genhtml_branch_coverage=1 00:12:17.184 --rc genhtml_function_coverage=1 00:12:17.184 --rc genhtml_legend=1 00:12:17.184 --rc geninfo_all_blocks=1 00:12:17.184 --rc geninfo_unexecuted_blocks=1 00:12:17.184 00:12:17.184 ' 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # 
NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.184 09:02:53 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:17.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:12:17.184 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:19.087 09:02:55 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:19.087 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:19.087 09:02:55 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:19.087 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.087 09:02:55 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:19.087 Found net devices under 0000:09:00.0: cvl_0_0 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:19.087 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:19.088 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:19.088 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.088 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:19.088 Found net devices under 0000:09:00.1: cvl_0_1 
00:12:19.088 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:19.088 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:19.088 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:12:19.088 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:19.088 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:19.088 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:19.088 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:19.088 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:19.088 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:19.088 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:19.088 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:19.088 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:19.088 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:19.088 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:19.088 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:19.088 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:19.088 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:19.088 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:19.088 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:19.088 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:19.088 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:19.088 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:19.088 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:19.088 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:19.088 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:19.347 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:19.347 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:19.347 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:19.347 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:19.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:19.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:12:19.347 00:12:19.347 --- 10.0.0.2 ping statistics --- 00:12:19.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.347 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:12:19.347 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:19.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:19.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:12:19.347 00:12:19.347 --- 10.0.0.1 ping statistics --- 00:12:19.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.347 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:12:19.347 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:19.347 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:12:19.347 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:19.347 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:19.347 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:19.347 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:19.347 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:19.347 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:19.347 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:19.347 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:19.347 09:02:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:19.347 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:19.347 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:19.347 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3293474 00:12:19.347 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:19.347 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3293474 00:12:19.347 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 3293474 ']' 00:12:19.347 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.347 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:19.347 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.347 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:19.347 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:19.347 [2024-11-20 09:02:56.219221] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:12:19.347 [2024-11-20 09:02:56.219325] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:19.347 [2024-11-20 09:02:56.290316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:19.347 [2024-11-20 09:02:56.343059] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:19.347 [2024-11-20 09:02:56.343133] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:19.347 [2024-11-20 09:02:56.343146] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:19.347 [2024-11-20 09:02:56.343157] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:19.347 [2024-11-20 09:02:56.343180] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:19.347 [2024-11-20 09:02:56.343796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:19.605 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:12:19.605 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0
00:12:19.605 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:12:19.605 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable
00:12:19.605 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:12:19.605 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:19.605 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:12:19.605 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:19.605 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:12:19.605 [2024-11-20 09:02:56.482045] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:19.605 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:19.605 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:12:19.605 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:19.605 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:12:19.605 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:19.605 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:19.605 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:19.605 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:12:19.605 [2024-11-20 09:02:56.498225] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:19.605 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:19.605 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:12:19.605 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:19.605 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:12:19.605 NULL1
00:12:19.605 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:19.605 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:12:19.605 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:19.605 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:12:19.605 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:19.605 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:12:19.605 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:19.605 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:12:19.605 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:19.605 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:12:19.605 [2024-11-20 09:02:56.541603] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization...
00:12:19.605 [2024-11-20 09:02:56.541639] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3293498 ]
00:12:20.170 Attached to nqn.2016-06.io.spdk:cnode1
00:12:20.170 Namespace ID: 1 size: 1GB
00:12:20.170 fused_ordering(0)
[... fused_ordering(1) through fused_ordering(27) elided: one line per counter, all at timestamp 00:12:20.170 ...]
00:12:20.170 fused_ordering(28)
[... fused_ordering(29) through fused_ordering(996) elided: one line per counter, counters incrementing by 1, timestamps advancing 00:12:20.170 -> 00:12:21.849 ...]
00:12:21.849 fused_ordering(997)
00:12:21.849 fused_ordering(998) 00:12:21.849 fused_ordering(999) 00:12:21.849 fused_ordering(1000) 00:12:21.849 fused_ordering(1001) 00:12:21.849 fused_ordering(1002) 00:12:21.849 fused_ordering(1003) 00:12:21.849 fused_ordering(1004) 00:12:21.849 fused_ordering(1005) 00:12:21.849 fused_ordering(1006) 00:12:21.849 fused_ordering(1007) 00:12:21.849 fused_ordering(1008) 00:12:21.849 fused_ordering(1009) 00:12:21.849 fused_ordering(1010) 00:12:21.849 fused_ordering(1011) 00:12:21.849 fused_ordering(1012) 00:12:21.849 fused_ordering(1013) 00:12:21.849 fused_ordering(1014) 00:12:21.849 fused_ordering(1015) 00:12:21.849 fused_ordering(1016) 00:12:21.849 fused_ordering(1017) 00:12:21.849 fused_ordering(1018) 00:12:21.849 fused_ordering(1019) 00:12:21.849 fused_ordering(1020) 00:12:21.849 fused_ordering(1021) 00:12:21.849 fused_ordering(1022) 00:12:21.849 fused_ordering(1023) 00:12:21.849 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:21.849 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:21.849 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:21.849 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:12:21.849 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:21.849 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:12:21.849 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:21.849 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:21.849 rmmod nvme_tcp 00:12:21.849 rmmod nvme_fabrics 00:12:21.849 rmmod nvme_keyring 00:12:21.849 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:12:21.849 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:12:21.849 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:12:21.849 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3293474 ']' 00:12:21.849 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3293474 00:12:21.849 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 3293474 ']' 00:12:21.849 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 3293474 00:12:21.849 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:12:21.849 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:21.849 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3293474 00:12:22.108 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:12:22.108 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:12:22.108 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3293474' 00:12:22.108 killing process with pid 3293474 00:12:22.108 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 3293474 00:12:22.108 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 3293474 00:12:22.108 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:22.108 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:12:22.108 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:22.108 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:12:22.108 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:12:22.108 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:22.108 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:12:22.108 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:22.108 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:22.108 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.108 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:22.108 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:24.648 00:12:24.648 real 0m7.509s 00:12:24.648 user 0m5.071s 00:12:24.648 sys 0m3.068s 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:24.648 ************************************ 00:12:24.648 END TEST nvmf_fused_ordering 00:12:24.648 ************************************ 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:24.648 09:03:01 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:24.648 ************************************ 00:12:24.648 START TEST nvmf_ns_masking 00:12:24.648 ************************************ 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:24.648 * Looking for test storage... 00:12:24.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:12:24.648 09:03:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:24.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.648 --rc genhtml_branch_coverage=1 00:12:24.648 --rc genhtml_function_coverage=1 00:12:24.648 --rc genhtml_legend=1 00:12:24.648 --rc geninfo_all_blocks=1 00:12:24.648 --rc geninfo_unexecuted_blocks=1 00:12:24.648 00:12:24.648 ' 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:24.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.648 --rc genhtml_branch_coverage=1 00:12:24.648 --rc genhtml_function_coverage=1 00:12:24.648 --rc genhtml_legend=1 00:12:24.648 --rc geninfo_all_blocks=1 00:12:24.648 --rc geninfo_unexecuted_blocks=1 00:12:24.648 00:12:24.648 ' 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:24.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.648 --rc genhtml_branch_coverage=1 00:12:24.648 --rc genhtml_function_coverage=1 00:12:24.648 --rc genhtml_legend=1 00:12:24.648 --rc geninfo_all_blocks=1 00:12:24.648 --rc geninfo_unexecuted_blocks=1 00:12:24.648 00:12:24.648 ' 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:24.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.648 --rc genhtml_branch_coverage=1 00:12:24.648 --rc 
genhtml_function_coverage=1 00:12:24.648 --rc genhtml_legend=1 00:12:24.648 --rc geninfo_all_blocks=1 00:12:24.648 --rc geninfo_unexecuted_blocks=1 00:12:24.648 00:12:24.648 ' 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.648 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:24.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=2d711de5-f574-4e93-9a22-c0d3d443042d 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=81764b79-8421-4ee5-9467-3ddb3d04400b 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=9e8a3654-e606-49a7-a6a4-96faad668f65 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:12:24.649 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:26.551 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:26.551 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:12:26.551 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:26.551 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:26.551 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:26.551 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:26.552 09:03:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:26.552 09:03:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:26.552 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:26.552 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: 
cvl_0_0' 00:12:26.552 Found net devices under 0000:09:00.0: cvl_0_0 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:26.552 Found net devices under 0000:09:00.1: cvl_0_1 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:26.552 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:26.811 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:26.811 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:26.811 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:26.811 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:26.811 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:26.811 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:26.811 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:26.811 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:26.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:26.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:12:26.811 00:12:26.811 --- 10.0.0.2 ping statistics --- 00:12:26.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.811 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:12:26.811 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:26.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:26.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:12:26.811 00:12:26.811 --- 10.0.0.1 ping statistics --- 00:12:26.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.811 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:12:26.811 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.811 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:12:26.811 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:26.811 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.811 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:26.811 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:26.811 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.811 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:26.811 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:26.811 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:26.811 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:26.811 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:26.811 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:26.811 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3295934 00:12:26.811 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:26.811 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3295934 00:12:26.811 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 3295934 ']' 00:12:26.811 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.811 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:26.811 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.811 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:26.811 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:26.811 [2024-11-20 09:03:03.751778] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:12:26.811 [2024-11-20 09:03:03.751849] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.811 [2024-11-20 09:03:03.821673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.069 [2024-11-20 09:03:03.877185] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.069 [2024-11-20 09:03:03.877239] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:27.069 [2024-11-20 09:03:03.877259] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.069 [2024-11-20 09:03:03.877275] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.069 [2024-11-20 09:03:03.877289] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:27.069 [2024-11-20 09:03:03.877899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.069 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:27.069 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:12:27.069 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:27.069 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:27.069 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:27.069 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.069 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:27.327 [2024-11-20 09:03:04.315824] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:27.327 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:27.327 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:27.327 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:12:27.892 Malloc1 00:12:27.892 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:28.149 Malloc2 00:12:28.149 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:28.406 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:28.663 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:28.920 [2024-11-20 09:03:05.821222] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:28.920 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:28.920 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9e8a3654-e606-49a7-a6a4-96faad668f65 -a 10.0.0.2 -s 4420 -i 4 00:12:29.178 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:29.178 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:12:29.178 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:29.178 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:29.178 09:03:05 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:12:31.076 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:31.076 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:31.076 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:31.076 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:31.076 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:31.076 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:12:31.076 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:31.076 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:31.076 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:31.076 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:31.076 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:31.076 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:31.076 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:31.076 [ 0]:0x1 00:12:31.077 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:31.077 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:31.334 
09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1e6b13aaa9d841789ebe98d11c7de176 00:12:31.334 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1e6b13aaa9d841789ebe98d11c7de176 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:31.334 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:31.593 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:31.593 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:31.593 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:31.593 [ 0]:0x1 00:12:31.593 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:31.593 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:31.593 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1e6b13aaa9d841789ebe98d11c7de176 00:12:31.593 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1e6b13aaa9d841789ebe98d11c7de176 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:31.593 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:31.593 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:31.593 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:31.593 [ 1]:0x2 00:12:31.593 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:12:31.593 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:31.593 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=75770506f089438f9496cba2202ff9cc 00:12:31.593 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 75770506f089438f9496cba2202ff9cc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:31.593 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:31.593 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:31.593 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.593 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:31.890 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:32.183 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:32.183 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9e8a3654-e606-49a7-a6a4-96faad668f65 -a 10.0.0.2 -s 4420 -i 4 00:12:32.439 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:32.439 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:12:32.439 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:32.439 09:03:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:12:32.439 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:12:32.439 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 
00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:34.968 [ 0]:0x2 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=75770506f089438f9496cba2202ff9cc 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 75770506f089438f9496cba2202ff9cc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:34.968 [ 0]:0x1 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1e6b13aaa9d841789ebe98d11c7de176 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1e6b13aaa9d841789ebe98d11c7de176 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:34.968 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:34.969 [ 1]:0x2 00:12:35.226 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:35.226 09:03:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:35.226 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=75770506f089438f9496cba2202ff9cc 00:12:35.226 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 75770506f089438f9496cba2202ff9cc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:35.226 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:35.485 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:35.485 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:35.485 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:12:35.485 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:35.485 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.485 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:12:35.485 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.485 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:35.485 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:35.485 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:35.485 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:35.485 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:35.485 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:35.485 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:35.485 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:35.485 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:35.485 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:35.485 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:35.485 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:35.485 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:35.485 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:35.485 [ 0]:0x2 00:12:35.485 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:35.485 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:35.485 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=75770506f089438f9496cba2202ff9cc 00:12:35.485 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 75770506f089438f9496cba2202ff9cc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:35.485 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:35.485 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:35.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.485 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:35.743 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:35.743 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9e8a3654-e606-49a7-a6a4-96faad668f65 -a 10.0.0.2 -s 4420 -i 4 00:12:36.001 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:36.001 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:12:36.001 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:36.001 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:12:36.001 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:12:36.001 09:03:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:12:38.527 09:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:38.527 09:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:38.527 09:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:38.527 09:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:12:38.527 09:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:38.527 09:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:12:38.527 09:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:38.527 09:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:38.527 09:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:38.527 09:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:38.527 09:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:38.527 09:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:38.527 09:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:38.527 [ 0]:0x1 00:12:38.527 09:03:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:38.527 09:03:14 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1e6b13aaa9d841789ebe98d11c7de176 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1e6b13aaa9d841789ebe98d11c7de176 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:38.528 [ 1]:0x2 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=75770506f089438f9496cba2202ff9cc 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 75770506f089438f9496cba2202ff9cc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:12:38.528 
09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:38.528 [ 0]:0x2 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=75770506f089438f9496cba2202ff9cc 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 75770506f089438f9496cba2202ff9cc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:38.528 09:03:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:38.528 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:39.093 [2024-11-20 09:03:15.823162] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:39.093 request: 00:12:39.093 { 00:12:39.093 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:39.093 "nsid": 2, 00:12:39.093 "host": "nqn.2016-06.io.spdk:host1", 00:12:39.093 "method": "nvmf_ns_remove_host", 00:12:39.093 "req_id": 1 00:12:39.093 } 00:12:39.093 Got JSON-RPC error response 00:12:39.093 response: 00:12:39.093 { 00:12:39.093 "code": -32602, 00:12:39.093 "message": "Invalid parameters" 00:12:39.093 } 00:12:39.093 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:39.093 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:39.093 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:39.093 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:39.093 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:39.093 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:39.093 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:12:39.093 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:39.093 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:39.093 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:12:39.093 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:39.093 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:39.093 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.093 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:39.093 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:39.093 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.093 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:39.093 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.094 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:39.094 09:03:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:39.094 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:39.094 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:39.094 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:39.094 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.094 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:39.094 [ 0]:0x2 00:12:39.094 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:39.094 09:03:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.094 09:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=75770506f089438f9496cba2202ff9cc 00:12:39.094 09:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 75770506f089438f9496cba2202ff9cc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.094 09:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:39.094 09:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:39.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.352 09:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3298065 00:12:39.352 09:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:39.352 09:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.352 09:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3298065 /var/tmp/host.sock 00:12:39.352 09:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 3298065 ']' 00:12:39.352 09:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:12:39.352 09:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:39.352 09:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:39.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:39.353 09:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:39.353 09:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:39.353 [2024-11-20 09:03:16.201116] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:12:39.353 [2024-11-20 09:03:16.201213] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3298065 ] 00:12:39.353 [2024-11-20 09:03:16.268179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.353 [2024-11-20 09:03:16.325298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.611 09:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:39.611 09:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:12:39.611 09:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:39.868 09:03:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:40.126 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 2d711de5-f574-4e93-9a22-c0d3d443042d 00:12:40.126 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:40.126 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 2D711DE5F5744E939A22C0D3D443042D -i 00:12:40.691 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 81764b79-8421-4ee5-9467-3ddb3d04400b 00:12:40.691 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:40.691 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 81764B7984214EE594673DDB3D04400B -i 00:12:40.691 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:40.949 09:03:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:41.206 09:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:41.206 09:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:41.772 nvme0n1 00:12:41.772 09:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:41.772 09:03:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:42.336 nvme1n2 00:12:42.336 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:42.336 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:42.336 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:42.336 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:42.336 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:42.593 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:42.593 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:42.593 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:42.593 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:42.850 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 2d711de5-f574-4e93-9a22-c0d3d443042d == \2\d\7\1\1\d\e\5\-\f\5\7\4\-\4\e\9\3\-\9\a\2\2\-\c\0\d\3\d\4\4\3\0\4\2\d ]] 00:12:42.850 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:42.850 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:42.850 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:43.108 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 81764b79-8421-4ee5-9467-3ddb3d04400b == \8\1\7\6\4\b\7\9\-\8\4\2\1\-\4\e\e\5\-\9\4\6\7\-\3\d\d\b\3\d\0\4\4\0\0\b ]] 00:12:43.108 09:03:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.366 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:43.623 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 2d711de5-f574-4e93-9a22-c0d3d443042d 00:12:43.623 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:43.623 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 2D711DE5F5744E939A22C0D3D443042D 00:12:43.623 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:43.623 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 2D711DE5F5744E939A22C0D3D443042D 00:12:43.623 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:43.623 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:43.623 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:43.623 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:43.623 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:43.623 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:43.623 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:43.623 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:43.624 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 2D711DE5F5744E939A22C0D3D443042D 00:12:43.881 [2024-11-20 09:03:20.877945] bdev.c:8432:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:12:43.881 [2024-11-20 09:03:20.877981] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:12:43.881 [2024-11-20 09:03:20.878003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.881 request: 00:12:43.881 { 00:12:43.881 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:43.881 "namespace": { 00:12:43.881 "bdev_name": "invalid", 00:12:43.881 "nsid": 1, 00:12:43.881 "nguid": "2D711DE5F5744E939A22C0D3D443042D", 00:12:43.881 "no_auto_visible": false 00:12:43.881 }, 00:12:43.881 "method": "nvmf_subsystem_add_ns", 00:12:43.881 "req_id": 1 00:12:43.881 } 00:12:43.881 Got JSON-RPC error response 00:12:43.881 response: 00:12:43.881 { 00:12:43.881 "code": -32602, 00:12:43.881 "message": "Invalid parameters" 00:12:43.881 } 00:12:43.881 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:43.881 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:43.881 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:43.881 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:43.881 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 2d711de5-f574-4e93-9a22-c0d3d443042d 00:12:43.881 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:43.881 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 2D711DE5F5744E939A22C0D3D443042D -i 00:12:44.139 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:12:46.666 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:12:46.666 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:12:46.666 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:46.666 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:12:46.666 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3298065 00:12:46.666 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 3298065 ']' 00:12:46.666 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 3298065 00:12:46.666 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:12:46.666 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:46.666 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3298065 00:12:46.666 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:12:46.666 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:12:46.666 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3298065' 00:12:46.666 killing process with pid 3298065 00:12:46.666 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 3298065 00:12:46.666 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 3298065 00:12:46.924 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:47.182 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:12:47.182 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:12:47.182 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:47.182 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:12:47.182 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:47.182 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:12:47.182 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:47.182 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:47.182 rmmod nvme_tcp 00:12:47.182 rmmod 
nvme_fabrics 00:12:47.440 rmmod nvme_keyring 00:12:47.440 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:47.440 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:12:47.440 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:12:47.440 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3295934 ']' 00:12:47.440 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3295934 00:12:47.440 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 3295934 ']' 00:12:47.440 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 3295934 00:12:47.440 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:12:47.440 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:47.440 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3295934 00:12:47.440 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:47.440 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:47.440 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3295934' 00:12:47.440 killing process with pid 3295934 00:12:47.440 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 3295934 00:12:47.440 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 3295934 00:12:47.699 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:47.699 
09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:47.699 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:47.699 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:12:47.699 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:12:47.699 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:47.699 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:12:47.699 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:47.699 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:47.699 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.699 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:47.699 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.603 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:49.603 00:12:49.603 real 0m25.400s 00:12:49.603 user 0m36.824s 00:12:49.603 sys 0m4.722s 00:12:49.603 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:49.603 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:49.603 ************************************ 00:12:49.603 END TEST nvmf_ns_masking 00:12:49.603 ************************************ 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra 
-- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:49.862 ************************************ 00:12:49.862 START TEST nvmf_nvme_cli 00:12:49.862 ************************************ 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:49.862 * Looking for test storage... 00:12:49.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:12:49.862 09:03:26 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:49.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.862 --rc genhtml_branch_coverage=1 00:12:49.862 --rc genhtml_function_coverage=1 00:12:49.862 --rc genhtml_legend=1 00:12:49.862 --rc geninfo_all_blocks=1 00:12:49.862 --rc geninfo_unexecuted_blocks=1 00:12:49.862 
00:12:49.862 ' 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:49.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.862 --rc genhtml_branch_coverage=1 00:12:49.862 --rc genhtml_function_coverage=1 00:12:49.862 --rc genhtml_legend=1 00:12:49.862 --rc geninfo_all_blocks=1 00:12:49.862 --rc geninfo_unexecuted_blocks=1 00:12:49.862 00:12:49.862 ' 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:49.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.862 --rc genhtml_branch_coverage=1 00:12:49.862 --rc genhtml_function_coverage=1 00:12:49.862 --rc genhtml_legend=1 00:12:49.862 --rc geninfo_all_blocks=1 00:12:49.862 --rc geninfo_unexecuted_blocks=1 00:12:49.862 00:12:49.862 ' 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:49.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.862 --rc genhtml_branch_coverage=1 00:12:49.862 --rc genhtml_function_coverage=1 00:12:49.862 --rc genhtml_legend=1 00:12:49.862 --rc geninfo_all_blocks=1 00:12:49.862 --rc geninfo_unexecuted_blocks=1 00:12:49.862 00:12:49.862 ' 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.862 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.863 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.863 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:49.863 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.863 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:12:49.863 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:49.863 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:49.863 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:49.863 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:49.863 09:03:26 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:49.863 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:49.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:49.863 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:49.863 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:49.863 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:49.863 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:49.863 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:49.863 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:49.863 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:49.863 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:49.863 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:49.863 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:49.863 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:49.863 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:49.863 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.863 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:49.863 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:12:49.863 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:49.863 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:49.863 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:12:49.863 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:12:52.396 09:03:28 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:52.396 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:52.396 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.396 09:03:28 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:52.396 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.397 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:52.397 Found net devices under 0000:09:00.0: cvl_0_0 00:12:52.397 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.397 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:52.397 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.397 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:52.397 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.397 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:52.397 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:52.397 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.397 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:52.397 Found net devices under 0000:09:00.1: cvl_0_1 00:12:52.397 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.397 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:52.397 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:12:52.397 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:52.397 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:52.397 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:52.397 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:52.397 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:52.397 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:52.397 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:52.397 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:52.397 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:52.397 09:03:28 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:52.397 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:52.397 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:52.397 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:52.397 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:52.397 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:52.397 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:52.397 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:52.397 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:52.397 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:52.397 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:52.397 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:52.397 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:52.397 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:52.397 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:52.397 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:52.397 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:52.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:52.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:12:52.397 00:12:52.397 --- 10.0.0.2 ping statistics --- 00:12:52.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.397 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:12:52.397 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:52.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:52.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:12:52.397 00:12:52.397 --- 10.0.0.1 ping statistics --- 00:12:52.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.397 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:12:52.397 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:52.397 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:12:52.397 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:52.397 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:52.397 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:52.397 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:52.397 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:52.397 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:52.397 09:03:29 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:52.397 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:52.397 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:52.397 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:52.397 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:52.397 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3300985 00:12:52.397 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:52.397 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3300985 00:12:52.397 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 3300985 ']' 00:12:52.397 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.397 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:52.397 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.397 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:52.397 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:52.397 [2024-11-20 09:03:29.147369] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:12:52.397 [2024-11-20 09:03:29.147450] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.397 [2024-11-20 09:03:29.221342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:52.397 [2024-11-20 09:03:29.283780] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.397 [2024-11-20 09:03:29.283833] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.397 [2024-11-20 09:03:29.283855] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.397 [2024-11-20 09:03:29.283871] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.397 [2024-11-20 09:03:29.283885] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:52.397 [2024-11-20 09:03:29.285515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.397 [2024-11-20 09:03:29.285572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.397 [2024-11-20 09:03:29.285638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:52.397 [2024-11-20 09:03:29.285641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.397 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:52.397 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:12:52.397 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:52.397 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:52.397 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:52.656 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:52.656 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:52.656 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.656 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:52.656 [2024-11-20 09:03:29.441787] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:52.656 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.656 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:52.656 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:52.656 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:52.656 Malloc0 00:12:52.656 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.656 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:52.656 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.656 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:52.656 Malloc1 00:12:52.656 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.656 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:52.656 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.656 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:52.656 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.656 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:52.656 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.656 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:52.656 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.656 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.656 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.656 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:52.656 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.656 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.656 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.656 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:52.656 [2024-11-20 09:03:29.544413] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.656 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.656 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:52.656 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.656 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:52.656 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.656 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:12:52.913 00:12:52.913 Discovery Log Number of Records 2, Generation counter 2 00:12:52.913 =====Discovery Log Entry 0====== 00:12:52.913 trtype: tcp 00:12:52.913 adrfam: ipv4 00:12:52.913 subtype: current discovery subsystem 00:12:52.913 treq: not required 00:12:52.913 portid: 0 00:12:52.913 trsvcid: 4420 
00:12:52.913 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:52.913 traddr: 10.0.0.2 00:12:52.913 eflags: explicit discovery connections, duplicate discovery information 00:12:52.913 sectype: none 00:12:52.913 =====Discovery Log Entry 1====== 00:12:52.913 trtype: tcp 00:12:52.913 adrfam: ipv4 00:12:52.913 subtype: nvme subsystem 00:12:52.913 treq: not required 00:12:52.913 portid: 0 00:12:52.913 trsvcid: 4420 00:12:52.913 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:52.913 traddr: 10.0.0.2 00:12:52.913 eflags: none 00:12:52.913 sectype: none 00:12:52.913 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:52.913 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:52.913 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:52.913 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:52.913 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:52.913 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:52.913 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:52.913 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:52.913 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:52.913 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:52.913 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.477 09:03:30 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:53.477 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:12:53.477 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:53.477 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:12:53.478 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:12:53.478 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:56.006 
09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:12:56.006 /dev/nvme0n2 ]] 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:56.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:12:56.006 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:56.006 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.006 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:56.006 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.006 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # 
return 0 00:12:56.006 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:56.006 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:56.006 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.006 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:56.006 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.006 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:56.006 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:56.006 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:56.006 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:12:56.263 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:56.263 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:12:56.263 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:56.263 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:56.263 rmmod nvme_tcp 00:12:56.263 rmmod nvme_fabrics 00:12:56.263 rmmod nvme_keyring 00:12:56.263 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:56.263 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:12:56.263 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:12:56.263 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3300985 ']' 
00:12:56.263 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3300985 00:12:56.263 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 3300985 ']' 00:12:56.263 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 3300985 00:12:56.263 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:12:56.263 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:56.263 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3300985 00:12:56.263 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:56.263 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:56.263 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3300985' 00:12:56.263 killing process with pid 3300985 00:12:56.263 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 3300985 00:12:56.263 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 3300985 00:12:56.520 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:56.520 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:56.520 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:56.520 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:12:56.520 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:12:56.520 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:12:56.520 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:12:56.520 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:56.520 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:56.520 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.520 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:56.520 09:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.463 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:58.463 00:12:58.463 real 0m8.784s 00:12:58.463 user 0m16.994s 00:12:58.463 sys 0m2.348s 00:12:58.463 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:58.463 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:58.463 ************************************ 00:12:58.463 END TEST nvmf_nvme_cli 00:12:58.464 ************************************ 00:12:58.464 09:03:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:12:58.464 09:03:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:58.464 09:03:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:58.464 09:03:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:58.464 09:03:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:58.722 ************************************ 00:12:58.722 
START TEST nvmf_vfio_user 00:12:58.722 ************************************ 00:12:58.722 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:58.722 * Looking for test storage... 00:12:58.722 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:58.722 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:58.722 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:12:58.722 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:58.722 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:58.722 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:58.722 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:58.722 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:58.722 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:12:58.722 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:12:58.722 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:12:58.722 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:12:58.722 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:12:58.722 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:12:58.722 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:12:58.722 09:03:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:58.722 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:12:58.722 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:12:58.722 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:58.722 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:58.722 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:12:58.722 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:12:58.722 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:58.722 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:12:58.722 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:12:58.722 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:12:58.722 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:12:58.722 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:58.722 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:12:58.722 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:12:58.722 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:58.722 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:58.722 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:12:58.722 09:03:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:58.722 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:58.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.722 --rc genhtml_branch_coverage=1 00:12:58.722 --rc genhtml_function_coverage=1 00:12:58.722 --rc genhtml_legend=1 00:12:58.722 --rc geninfo_all_blocks=1 00:12:58.722 --rc geninfo_unexecuted_blocks=1 00:12:58.722 00:12:58.722 ' 00:12:58.722 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:58.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.722 --rc genhtml_branch_coverage=1 00:12:58.722 --rc genhtml_function_coverage=1 00:12:58.722 --rc genhtml_legend=1 00:12:58.722 --rc geninfo_all_blocks=1 00:12:58.722 --rc geninfo_unexecuted_blocks=1 00:12:58.722 00:12:58.722 ' 00:12:58.722 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:58.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.722 --rc genhtml_branch_coverage=1 00:12:58.722 --rc genhtml_function_coverage=1 00:12:58.722 --rc genhtml_legend=1 00:12:58.722 --rc geninfo_all_blocks=1 00:12:58.722 --rc geninfo_unexecuted_blocks=1 00:12:58.722 00:12:58.723 ' 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:58.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.723 --rc genhtml_branch_coverage=1 00:12:58.723 --rc genhtml_function_coverage=1 00:12:58.723 --rc genhtml_legend=1 00:12:58.723 --rc geninfo_all_blocks=1 00:12:58.723 --rc geninfo_unexecuted_blocks=1 00:12:58.723 00:12:58.723 ' 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:58.723 
09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:58.723 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:58.723 09:03:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3301926 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3301926' 00:12:58.723 Process pid: 3301926 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3301926 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' 
-z 3301926 ']' 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:58.723 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:58.723 [2024-11-20 09:03:35.720725] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:12:58.723 [2024-11-20 09:03:35.720810] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:58.981 [2024-11-20 09:03:35.791753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:58.981 [2024-11-20 09:03:35.852090] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:58.981 [2024-11-20 09:03:35.852140] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:58.981 [2024-11-20 09:03:35.852163] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:58.981 [2024-11-20 09:03:35.852181] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:58.981 [2024-11-20 09:03:35.852196] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:58.981 [2024-11-20 09:03:35.853878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.981 [2024-11-20 09:03:35.853946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:58.981 [2024-11-20 09:03:35.854014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:58.981 [2024-11-20 09:03:35.854017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.981 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:58.981 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:12:58.981 09:03:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:00.352 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:00.352 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:00.352 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:00.352 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:00.352 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:00.352 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:00.610 Malloc1 00:13:00.610 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:00.902 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:01.184 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:01.441 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:01.441 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:01.441 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:01.699 Malloc2 00:13:01.699 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:01.957 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:02.214 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:02.471 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:02.471 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:02.471 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:13:02.471 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:02.471 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:02.471 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:02.471 [2024-11-20 09:03:39.481375] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:13:02.471 [2024-11-20 09:03:39.481417] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3302344 ] 00:13:02.730 [2024-11-20 09:03:39.530551] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:02.730 [2024-11-20 09:03:39.535998] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:02.730 [2024-11-20 09:03:39.536030] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f32ca376000 00:13:02.730 [2024-11-20 09:03:39.536994] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:02.730 [2024-11-20 09:03:39.537991] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:02.730 [2024-11-20 09:03:39.538995] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:02.730 [2024-11-20 09:03:39.540003] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:02.730 [2024-11-20 09:03:39.541008] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:02.730 [2024-11-20 09:03:39.542014] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:02.730 [2024-11-20 09:03:39.543022] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:02.730 [2024-11-20 09:03:39.544029] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:02.730 [2024-11-20 09:03:39.545033] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:02.730 [2024-11-20 09:03:39.545059] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f32ca36b000 00:13:02.730 [2024-11-20 09:03:39.546176] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:02.730 [2024-11-20 09:03:39.565787] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:02.730 [2024-11-20 09:03:39.565847] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:13:02.730 [2024-11-20 09:03:39.568164] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:13:02.730 [2024-11-20 09:03:39.568221] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:02.730 [2024-11-20 09:03:39.568335] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:13:02.730 [2024-11-20 09:03:39.568380] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:13:02.730 [2024-11-20 09:03:39.568393] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:13:02.730 [2024-11-20 09:03:39.569158] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:02.730 [2024-11-20 09:03:39.569179] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:13:02.730 [2024-11-20 09:03:39.569192] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:13:02.730 [2024-11-20 09:03:39.570162] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:02.730 [2024-11-20 09:03:39.570183] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:13:02.731 [2024-11-20 09:03:39.570197] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:02.731 [2024-11-20 09:03:39.571163] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:02.731 [2024-11-20 09:03:39.571182] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:02.731 [2024-11-20 09:03:39.572169] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:02.731 [2024-11-20 09:03:39.572187] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:02.731 [2024-11-20 09:03:39.572196] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:02.731 [2024-11-20 09:03:39.572207] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:02.731 [2024-11-20 09:03:39.572318] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:13:02.731 [2024-11-20 09:03:39.572329] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:02.731 [2024-11-20 09:03:39.572338] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:02.731 [2024-11-20 09:03:39.573182] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:02.731 [2024-11-20 09:03:39.574185] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:02.731 [2024-11-20 09:03:39.575188] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:13:02.731 [2024-11-20 09:03:39.576185] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:02.731 [2024-11-20 09:03:39.576335] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:02.731 [2024-11-20 09:03:39.577202] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:02.731 [2024-11-20 09:03:39.577224] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:02.731 [2024-11-20 09:03:39.577234] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:02.731 [2024-11-20 09:03:39.577258] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:13:02.731 [2024-11-20 09:03:39.577280] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:02.731 [2024-11-20 09:03:39.577335] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:02.731 [2024-11-20 09:03:39.577349] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:02.731 [2024-11-20 09:03:39.577356] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:02.731 [2024-11-20 09:03:39.577378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:02.731 [2024-11-20 09:03:39.577441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:02.731 [2024-11-20 09:03:39.577461] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:13:02.731 [2024-11-20 09:03:39.577469] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:13:02.731 [2024-11-20 09:03:39.577476] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:13:02.731 [2024-11-20 09:03:39.577485] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:02.731 [2024-11-20 09:03:39.577500] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:13:02.731 [2024-11-20 09:03:39.577509] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:13:02.731 [2024-11-20 09:03:39.577517] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:13:02.731 [2024-11-20 09:03:39.577534] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:02.731 [2024-11-20 09:03:39.577551] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:02.731 [2024-11-20 09:03:39.577566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:02.731 [2024-11-20 09:03:39.577584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:02.731 [2024-11-20 
09:03:39.577617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:02.731 [2024-11-20 09:03:39.577630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:02.731 [2024-11-20 09:03:39.577642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:02.731 [2024-11-20 09:03:39.577651] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:02.731 [2024-11-20 09:03:39.577662] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:02.731 [2024-11-20 09:03:39.577675] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:02.731 [2024-11-20 09:03:39.577691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:02.731 [2024-11-20 09:03:39.577706] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:13:02.731 [2024-11-20 09:03:39.577716] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:02.731 [2024-11-20 09:03:39.577727] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:13:02.731 [2024-11-20 09:03:39.577737] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:13:02.731 [2024-11-20 09:03:39.577749] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:02.731 [2024-11-20 09:03:39.577763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:02.731 [2024-11-20 09:03:39.577830] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:13:02.731 [2024-11-20 09:03:39.577847] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:02.731 [2024-11-20 09:03:39.577860] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:02.731 [2024-11-20 09:03:39.577868] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:02.731 [2024-11-20 09:03:39.577874] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:02.731 [2024-11-20 09:03:39.577883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:02.731 [2024-11-20 09:03:39.577897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:02.731 [2024-11-20 09:03:39.577916] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:13:02.731 [2024-11-20 09:03:39.577938] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:13:02.731 [2024-11-20 09:03:39.577954] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:02.731 [2024-11-20 09:03:39.577965] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:02.731 [2024-11-20 09:03:39.577973] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:02.731 [2024-11-20 09:03:39.577979] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:02.731 [2024-11-20 09:03:39.577992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:02.731 [2024-11-20 09:03:39.578015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:02.731 [2024-11-20 09:03:39.578041] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:02.731 [2024-11-20 09:03:39.578055] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:02.731 [2024-11-20 09:03:39.578067] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:02.731 [2024-11-20 09:03:39.578075] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:02.731 [2024-11-20 09:03:39.578081] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:02.731 [2024-11-20 09:03:39.578090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:02.731 [2024-11-20 09:03:39.578104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:02.731 [2024-11-20 09:03:39.578118] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:02.731 [2024-11-20 09:03:39.578129] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:02.731 [2024-11-20 09:03:39.578143] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:13:02.731 [2024-11-20 09:03:39.578154] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:02.731 [2024-11-20 09:03:39.578162] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:02.731 [2024-11-20 09:03:39.578170] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:13:02.731 [2024-11-20 09:03:39.578179] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:02.732 [2024-11-20 09:03:39.578186] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:13:02.732 [2024-11-20 09:03:39.578194] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:13:02.732 [2024-11-20 09:03:39.578223] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:02.732 [2024-11-20 09:03:39.578242] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:02.732 [2024-11-20 09:03:39.578261] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:02.732 [2024-11-20 09:03:39.578274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:02.732 [2024-11-20 09:03:39.578317] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:02.732 [2024-11-20 09:03:39.578333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:02.732 [2024-11-20 09:03:39.578350] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:02.732 [2024-11-20 09:03:39.578367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:02.732 [2024-11-20 09:03:39.578392] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:02.732 [2024-11-20 09:03:39.578403] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:02.732 [2024-11-20 09:03:39.578409] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:02.732 [2024-11-20 09:03:39.578415] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:02.732 [2024-11-20 09:03:39.578421] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:02.732 [2024-11-20 09:03:39.578430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:13:02.732 [2024-11-20 09:03:39.578442] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:02.732 [2024-11-20 09:03:39.578450] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:02.732 [2024-11-20 09:03:39.578456] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:02.732 [2024-11-20 09:03:39.578465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:02.732 [2024-11-20 09:03:39.578477] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:02.732 [2024-11-20 09:03:39.578484] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:02.732 [2024-11-20 09:03:39.578490] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:02.732 [2024-11-20 09:03:39.578499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:02.732 [2024-11-20 09:03:39.578511] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:02.732 [2024-11-20 09:03:39.578519] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:02.732 [2024-11-20 09:03:39.578525] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:02.732 [2024-11-20 09:03:39.578533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:02.732 [2024-11-20 09:03:39.578545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:13:02.732 [2024-11-20 09:03:39.578567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:02.732 [2024-11-20 09:03:39.578588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:02.732 [2024-11-20 09:03:39.578602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:02.732 ===================================================== 00:13:02.732 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:02.732 ===================================================== 00:13:02.732 Controller Capabilities/Features 00:13:02.732 ================================ 00:13:02.732 Vendor ID: 4e58 00:13:02.732 Subsystem Vendor ID: 4e58 00:13:02.732 Serial Number: SPDK1 00:13:02.732 Model Number: SPDK bdev Controller 00:13:02.732 Firmware Version: 25.01 00:13:02.732 Recommended Arb Burst: 6 00:13:02.732 IEEE OUI Identifier: 8d 6b 50 00:13:02.732 Multi-path I/O 00:13:02.732 May have multiple subsystem ports: Yes 00:13:02.732 May have multiple controllers: Yes 00:13:02.732 Associated with SR-IOV VF: No 00:13:02.732 Max Data Transfer Size: 131072 00:13:02.732 Max Number of Namespaces: 32 00:13:02.732 Max Number of I/O Queues: 127 00:13:02.732 NVMe Specification Version (VS): 1.3 00:13:02.732 NVMe Specification Version (Identify): 1.3 00:13:02.732 Maximum Queue Entries: 256 00:13:02.732 Contiguous Queues Required: Yes 00:13:02.732 Arbitration Mechanisms Supported 00:13:02.732 Weighted Round Robin: Not Supported 00:13:02.732 Vendor Specific: Not Supported 00:13:02.732 Reset Timeout: 15000 ms 00:13:02.732 Doorbell Stride: 4 bytes 00:13:02.732 NVM Subsystem Reset: Not Supported 00:13:02.732 Command Sets Supported 00:13:02.732 NVM Command Set: Supported 00:13:02.732 Boot Partition: Not Supported 00:13:02.732 Memory 
Page Size Minimum: 4096 bytes 00:13:02.732 Memory Page Size Maximum: 4096 bytes 00:13:02.732 Persistent Memory Region: Not Supported 00:13:02.732 Optional Asynchronous Events Supported 00:13:02.732 Namespace Attribute Notices: Supported 00:13:02.732 Firmware Activation Notices: Not Supported 00:13:02.732 ANA Change Notices: Not Supported 00:13:02.732 PLE Aggregate Log Change Notices: Not Supported 00:13:02.732 LBA Status Info Alert Notices: Not Supported 00:13:02.732 EGE Aggregate Log Change Notices: Not Supported 00:13:02.732 Normal NVM Subsystem Shutdown event: Not Supported 00:13:02.732 Zone Descriptor Change Notices: Not Supported 00:13:02.732 Discovery Log Change Notices: Not Supported 00:13:02.732 Controller Attributes 00:13:02.732 128-bit Host Identifier: Supported 00:13:02.732 Non-Operational Permissive Mode: Not Supported 00:13:02.732 NVM Sets: Not Supported 00:13:02.732 Read Recovery Levels: Not Supported 00:13:02.732 Endurance Groups: Not Supported 00:13:02.732 Predictable Latency Mode: Not Supported 00:13:02.732 Traffic Based Keep ALive: Not Supported 00:13:02.732 Namespace Granularity: Not Supported 00:13:02.732 SQ Associations: Not Supported 00:13:02.732 UUID List: Not Supported 00:13:02.732 Multi-Domain Subsystem: Not Supported 00:13:02.732 Fixed Capacity Management: Not Supported 00:13:02.732 Variable Capacity Management: Not Supported 00:13:02.732 Delete Endurance Group: Not Supported 00:13:02.732 Delete NVM Set: Not Supported 00:13:02.732 Extended LBA Formats Supported: Not Supported 00:13:02.732 Flexible Data Placement Supported: Not Supported 00:13:02.732 00:13:02.732 Controller Memory Buffer Support 00:13:02.732 ================================ 00:13:02.732 Supported: No 00:13:02.732 00:13:02.732 Persistent Memory Region Support 00:13:02.732 ================================ 00:13:02.732 Supported: No 00:13:02.732 00:13:02.732 Admin Command Set Attributes 00:13:02.732 ============================ 00:13:02.732 Security Send/Receive: Not Supported 
00:13:02.732 Format NVM: Not Supported 00:13:02.732 Firmware Activate/Download: Not Supported 00:13:02.732 Namespace Management: Not Supported 00:13:02.732 Device Self-Test: Not Supported 00:13:02.732 Directives: Not Supported 00:13:02.732 NVMe-MI: Not Supported 00:13:02.732 Virtualization Management: Not Supported 00:13:02.732 Doorbell Buffer Config: Not Supported 00:13:02.732 Get LBA Status Capability: Not Supported 00:13:02.732 Command & Feature Lockdown Capability: Not Supported 00:13:02.732 Abort Command Limit: 4 00:13:02.732 Async Event Request Limit: 4 00:13:02.732 Number of Firmware Slots: N/A 00:13:02.732 Firmware Slot 1 Read-Only: N/A 00:13:02.732 Firmware Activation Without Reset: N/A 00:13:02.732 Multiple Update Detection Support: N/A 00:13:02.732 Firmware Update Granularity: No Information Provided 00:13:02.732 Per-Namespace SMART Log: No 00:13:02.732 Asymmetric Namespace Access Log Page: Not Supported 00:13:02.732 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:02.732 Command Effects Log Page: Supported 00:13:02.732 Get Log Page Extended Data: Supported 00:13:02.732 Telemetry Log Pages: Not Supported 00:13:02.732 Persistent Event Log Pages: Not Supported 00:13:02.732 Supported Log Pages Log Page: May Support 00:13:02.732 Commands Supported & Effects Log Page: Not Supported 00:13:02.732 Feature Identifiers & Effects Log Page:May Support 00:13:02.732 NVMe-MI Commands & Effects Log Page: May Support 00:13:02.732 Data Area 4 for Telemetry Log: Not Supported 00:13:02.732 Error Log Page Entries Supported: 128 00:13:02.732 Keep Alive: Supported 00:13:02.732 Keep Alive Granularity: 10000 ms 00:13:02.732 00:13:02.732 NVM Command Set Attributes 00:13:02.732 ========================== 00:13:02.732 Submission Queue Entry Size 00:13:02.732 Max: 64 00:13:02.732 Min: 64 00:13:02.732 Completion Queue Entry Size 00:13:02.732 Max: 16 00:13:02.732 Min: 16 00:13:02.732 Number of Namespaces: 32 00:13:02.732 Compare Command: Supported 00:13:02.732 Write Uncorrectable 
Command: Not Supported 00:13:02.732 Dataset Management Command: Supported 00:13:02.732 Write Zeroes Command: Supported 00:13:02.733 Set Features Save Field: Not Supported 00:13:02.733 Reservations: Not Supported 00:13:02.733 Timestamp: Not Supported 00:13:02.733 Copy: Supported 00:13:02.733 Volatile Write Cache: Present 00:13:02.733 Atomic Write Unit (Normal): 1 00:13:02.733 Atomic Write Unit (PFail): 1 00:13:02.733 Atomic Compare & Write Unit: 1 00:13:02.733 Fused Compare & Write: Supported 00:13:02.733 Scatter-Gather List 00:13:02.733 SGL Command Set: Supported (Dword aligned) 00:13:02.733 SGL Keyed: Not Supported 00:13:02.733 SGL Bit Bucket Descriptor: Not Supported 00:13:02.733 SGL Metadata Pointer: Not Supported 00:13:02.733 Oversized SGL: Not Supported 00:13:02.733 SGL Metadata Address: Not Supported 00:13:02.733 SGL Offset: Not Supported 00:13:02.733 Transport SGL Data Block: Not Supported 00:13:02.733 Replay Protected Memory Block: Not Supported 00:13:02.733 00:13:02.733 Firmware Slot Information 00:13:02.733 ========================= 00:13:02.733 Active slot: 1 00:13:02.733 Slot 1 Firmware Revision: 25.01 00:13:02.733 00:13:02.733 00:13:02.733 Commands Supported and Effects 00:13:02.733 ============================== 00:13:02.733 Admin Commands 00:13:02.733 -------------- 00:13:02.733 Get Log Page (02h): Supported 00:13:02.733 Identify (06h): Supported 00:13:02.733 Abort (08h): Supported 00:13:02.733 Set Features (09h): Supported 00:13:02.733 Get Features (0Ah): Supported 00:13:02.733 Asynchronous Event Request (0Ch): Supported 00:13:02.733 Keep Alive (18h): Supported 00:13:02.733 I/O Commands 00:13:02.733 ------------ 00:13:02.733 Flush (00h): Supported LBA-Change 00:13:02.733 Write (01h): Supported LBA-Change 00:13:02.733 Read (02h): Supported 00:13:02.733 Compare (05h): Supported 00:13:02.733 Write Zeroes (08h): Supported LBA-Change 00:13:02.733 Dataset Management (09h): Supported LBA-Change 00:13:02.733 Copy (19h): Supported LBA-Change 00:13:02.733 
00:13:02.733 Error Log 00:13:02.733 ========= 00:13:02.733 00:13:02.733 Arbitration 00:13:02.733 =========== 00:13:02.733 Arbitration Burst: 1 00:13:02.733 00:13:02.733 Power Management 00:13:02.733 ================ 00:13:02.733 Number of Power States: 1 00:13:02.733 Current Power State: Power State #0 00:13:02.733 Power State #0: 00:13:02.733 Max Power: 0.00 W 00:13:02.733 Non-Operational State: Operational 00:13:02.733 Entry Latency: Not Reported 00:13:02.733 Exit Latency: Not Reported 00:13:02.733 Relative Read Throughput: 0 00:13:02.733 Relative Read Latency: 0 00:13:02.733 Relative Write Throughput: 0 00:13:02.733 Relative Write Latency: 0 00:13:02.733 Idle Power: Not Reported 00:13:02.733 Active Power: Not Reported 00:13:02.733 Non-Operational Permissive Mode: Not Supported 00:13:02.733 00:13:02.733 Health Information 00:13:02.733 ================== 00:13:02.733 Critical Warnings: 00:13:02.733 Available Spare Space: OK 00:13:02.733 Temperature: OK 00:13:02.733 Device Reliability: OK 00:13:02.733 Read Only: No 00:13:02.733 Volatile Memory Backup: OK 00:13:02.733 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:02.733 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:02.733 Available Spare: 0% 00:13:02.733 Available Spare Threshold: 0% 00:13:02.733 Life Percentage Used: 0%
[2024-11-20 09:03:39.578750] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
[2024-11-20 09:03:39.578767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
[2024-11-20 09:03:39.578813] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD
[2024-11-20 09:03:39.578831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 09:03:39.578843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 09:03:39.578856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 09:03:39.578866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 09:03:39.579210] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001
[2024-11-20 09:03:39.579232] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001
[2024-11-20 09:03:39.580203] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
[2024-11-20 09:03:39.580298] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us
[2024-11-20 09:03:39.580324] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms
[2024-11-20 09:03:39.581222] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9
[2024-11-20 09:03:39.581245] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds
[2024-11-20 09:03:39.581323] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl
[2024-11-20 09:03:39.586316] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:13:02.733 Data Units Read: 0 00:13:02.733 Data Units Written: 0 00:13:02.733 Host Read Commands: 0 00:13:02.733 Host Write Commands: 0 00:13:02.733 Controller Busy Time: 0 minutes 00:13:02.733 Power Cycles: 0 00:13:02.733 Power On Hours: 0 hours 00:13:02.733 Unsafe Shutdowns: 0 00:13:02.733 Unrecoverable Media Errors: 0 00:13:02.733 Lifetime Error Log Entries: 0 00:13:02.733 Warning Temperature Time: 0 minutes 00:13:02.733 Critical Temperature Time: 0 minutes 00:13:02.733 00:13:02.733 Number of Queues 00:13:02.733 ================ 00:13:02.733 Number of I/O Submission Queues: 127 00:13:02.733 Number of I/O Completion Queues: 127 00:13:02.733 00:13:02.733 Active Namespaces 00:13:02.733 ================= 00:13:02.733 Namespace ID:1 00:13:02.733 Error Recovery Timeout: Unlimited 00:13:02.733 Command Set Identifier: NVM (00h) 00:13:02.733 Deallocate: Supported 00:13:02.733 Deallocated/Unwritten Error: Not Supported 00:13:02.733 Deallocated Read Value: Unknown 00:13:02.733 Deallocate in Write Zeroes: Not Supported 00:13:02.733 Deallocated Guard Field: 0xFFFF 00:13:02.733 Flush: Supported 00:13:02.733 Reservation: Supported 00:13:02.733 Namespace Sharing Capabilities: Multiple Controllers 00:13:02.733 Size (in LBAs): 131072 (0GiB) 00:13:02.733 Capacity (in LBAs): 131072 (0GiB) 00:13:02.733 Utilization (in LBAs): 131072 (0GiB) 00:13:02.733 NGUID: 7E635452542C41E2A6EBB8F66401E07B 00:13:02.733 UUID: 7e635452-542c-41e2-a6eb-b8f66401e07b 00:13:02.733 Thin Provisioning: Not Supported 00:13:02.733 Per-NS Atomic Units: Yes 00:13:02.733 Atomic Boundary Size (Normal): 0 00:13:02.733 Atomic Boundary Size (PFail): 0 00:13:02.733 Atomic Boundary Offset: 0 00:13:02.733 Maximum Single Source Range Length: 65535 00:13:02.733 Maximum Copy Length: 65535 00:13:02.733 Maximum Source Range Count: 1 00:13:02.733 NGUID/EUI64 Never Reused: No 00:13:02.733 Namespace Write Protected: No 00:13:02.733 Number of LBA Formats: 1 00:13:02.733 Current LBA Format: LBA Format #00 00:13:02.733 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:13:02.733 00:13:02.733 09:03:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:02.991 [2024-11-20 09:03:39.849283] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:08.251 Initializing NVMe Controllers 00:13:08.251 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:08.251 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:08.251 Initialization complete. Launching workers. 00:13:08.251 ======================================================== 00:13:08.251 Latency(us) 00:13:08.251 Device Information : IOPS MiB/s Average min max 00:13:08.251 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 32477.33 126.86 3940.68 1206.06 7608.49 00:13:08.251 ======================================================== 00:13:08.251 Total : 32477.33 126.86 3940.68 1206.06 7608.49 00:13:08.251 00:13:08.251 [2024-11-20 09:03:44.874539] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:08.251 09:03:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:08.251 [2024-11-20 09:03:45.136762] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:13.510 Initializing NVMe Controllers 00:13:13.510 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:13.510 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:13.510 Initialization complete. Launching workers. 00:13:13.510 ======================================================== 00:13:13.510 Latency(us) 00:13:13.510 Device Information : IOPS MiB/s Average min max 00:13:13.510 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16034.20 62.63 7991.10 4991.16 15355.39 00:13:13.510 ======================================================== 00:13:13.510 Total : 16034.20 62.63 7991.10 4991.16 15355.39 00:13:13.510 00:13:13.510 [2024-11-20 09:03:50.171963] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:13.510 09:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:13.510 [2024-11-20 09:03:50.395015] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:18.771 [2024-11-20 09:03:55.456589] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:18.771 Initializing NVMe Controllers 00:13:18.771 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:18.771 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:18.771 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:18.771 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:18.771 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:18.772 Initialization complete. 
Launching workers. 00:13:18.772 Starting thread on core 2 00:13:18.772 Starting thread on core 3 00:13:18.772 Starting thread on core 1 00:13:18.772 09:03:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:18.772 [2024-11-20 09:03:55.779774] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:22.051 [2024-11-20 09:03:58.856200] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:22.051 Initializing NVMe Controllers 00:13:22.051 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:22.051 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:22.051 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:22.051 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:22.051 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:22.051 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:22.051 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:22.051 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:22.051 Initialization complete. Launching workers. 
00:13:22.051 Starting thread on core 1 with urgent priority queue 00:13:22.051 Starting thread on core 2 with urgent priority queue 00:13:22.051 Starting thread on core 3 with urgent priority queue 00:13:22.051 Starting thread on core 0 with urgent priority queue 00:13:22.051 SPDK bdev Controller (SPDK1 ) core 0: 5273.67 IO/s 18.96 secs/100000 ios 00:13:22.051 SPDK bdev Controller (SPDK1 ) core 1: 5222.33 IO/s 19.15 secs/100000 ios 00:13:22.051 SPDK bdev Controller (SPDK1 ) core 2: 4610.00 IO/s 21.69 secs/100000 ios 00:13:22.051 SPDK bdev Controller (SPDK1 ) core 3: 5446.67 IO/s 18.36 secs/100000 ios 00:13:22.051 ======================================================== 00:13:22.051 00:13:22.051 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:22.308 [2024-11-20 09:03:59.184825] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:22.308 Initializing NVMe Controllers 00:13:22.308 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:22.309 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:22.309 Namespace ID: 1 size: 0GB 00:13:22.309 Initialization complete. 00:13:22.309 INFO: using host memory buffer for IO 00:13:22.309 Hello world! 
00:13:22.309 [2024-11-20 09:03:59.218475] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:22.309 09:03:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:22.565 [2024-11-20 09:03:59.533758] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:23.936 Initializing NVMe Controllers 00:13:23.936 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:23.936 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:23.936 Initialization complete. Launching workers. 00:13:23.936 submit (in ns) avg, min, max = 6539.6, 3565.6, 4014164.4 00:13:23.936 complete (in ns) avg, min, max = 27881.2, 2074.4, 4024905.6 00:13:23.936 00:13:23.936 Submit histogram 00:13:23.936 ================ 00:13:23.936 Range in us Cumulative Count 00:13:23.936 3.556 - 3.579: 0.3670% ( 47) 00:13:23.936 3.579 - 3.603: 3.5762% ( 411) 00:13:23.936 3.603 - 3.627: 9.3308% ( 737) 00:13:23.936 3.627 - 3.650: 19.5518% ( 1309) 00:13:23.936 3.650 - 3.674: 27.7192% ( 1046) 00:13:23.936 3.674 - 3.698: 34.1766% ( 827) 00:13:23.936 3.698 - 3.721: 38.8538% ( 599) 00:13:23.936 3.721 - 3.745: 43.2810% ( 567) 00:13:23.936 3.745 - 3.769: 48.3876% ( 654) 00:13:23.936 3.769 - 3.793: 52.9320% ( 582) 00:13:23.936 3.793 - 3.816: 56.5082% ( 458) 00:13:23.936 3.816 - 3.840: 59.8813% ( 432) 00:13:23.936 3.840 - 3.864: 64.4647% ( 587) 00:13:23.936 3.864 - 3.887: 70.2194% ( 737) 00:13:23.936 3.887 - 3.911: 75.6071% ( 690) 00:13:23.936 3.911 - 3.935: 79.8470% ( 543) 00:13:23.936 3.935 - 3.959: 82.4237% ( 330) 00:13:23.936 3.959 - 3.982: 84.3289% ( 244) 00:13:23.936 3.982 - 4.006: 86.3356% ( 257) 00:13:23.936 4.006 - 4.030: 87.8192% ( 190) 00:13:23.936 4.030 - 4.053: 89.2949% ( 
189) 00:13:23.936 4.053 - 4.077: 90.7004% ( 180) 00:13:23.936 4.077 - 4.101: 91.8326% ( 145) 00:13:23.936 4.101 - 4.124: 92.9179% ( 139) 00:13:23.936 4.124 - 4.148: 94.0579% ( 146) 00:13:23.936 4.148 - 4.172: 94.8466% ( 101) 00:13:23.936 4.172 - 4.196: 95.2994% ( 58) 00:13:23.936 4.196 - 4.219: 95.7055% ( 52) 00:13:23.936 4.219 - 4.243: 96.0022% ( 38) 00:13:23.936 4.243 - 4.267: 96.1740% ( 22) 00:13:23.936 4.267 - 4.290: 96.3301% ( 20) 00:13:23.936 4.290 - 4.314: 96.4629% ( 17) 00:13:23.936 4.314 - 4.338: 96.5722% ( 14) 00:13:23.936 4.338 - 4.361: 96.6581% ( 11) 00:13:23.936 4.361 - 4.385: 96.7596% ( 13) 00:13:23.936 4.385 - 4.409: 96.8142% ( 7) 00:13:23.936 4.409 - 4.433: 96.8455% ( 4) 00:13:23.936 4.433 - 4.456: 96.8533% ( 1) 00:13:23.936 4.456 - 4.480: 96.8689% ( 2) 00:13:23.936 4.480 - 4.504: 96.8845% ( 2) 00:13:23.936 4.527 - 4.551: 96.8923% ( 1) 00:13:23.936 4.551 - 4.575: 96.9001% ( 1) 00:13:23.936 4.575 - 4.599: 96.9157% ( 2) 00:13:23.936 4.599 - 4.622: 96.9236% ( 1) 00:13:23.936 4.622 - 4.646: 96.9392% ( 2) 00:13:23.936 4.646 - 4.670: 96.9626% ( 3) 00:13:23.936 4.670 - 4.693: 97.0016% ( 5) 00:13:23.936 4.693 - 4.717: 97.0173% ( 2) 00:13:23.936 4.717 - 4.741: 97.0641% ( 6) 00:13:23.936 4.741 - 4.764: 97.1031% ( 5) 00:13:23.936 4.764 - 4.788: 97.1968% ( 12) 00:13:23.936 4.788 - 4.812: 97.2359% ( 5) 00:13:23.936 4.812 - 4.836: 97.3062% ( 9) 00:13:23.936 4.836 - 4.859: 97.3842% ( 10) 00:13:23.936 4.859 - 4.883: 97.4155% ( 4) 00:13:23.936 4.883 - 4.907: 97.4545% ( 5) 00:13:23.936 4.907 - 4.930: 97.5092% ( 7) 00:13:23.936 4.930 - 4.954: 97.5638% ( 7) 00:13:23.936 4.954 - 4.978: 97.5951% ( 4) 00:13:23.936 4.978 - 5.001: 97.6107% ( 2) 00:13:23.936 5.001 - 5.025: 97.6341% ( 3) 00:13:23.936 5.025 - 5.049: 97.6575% ( 3) 00:13:23.936 5.049 - 5.073: 97.6966% ( 5) 00:13:23.936 5.073 - 5.096: 97.7122% ( 2) 00:13:23.936 5.096 - 5.120: 97.7356% ( 3) 00:13:23.936 5.120 - 5.144: 97.7825% ( 6) 00:13:23.936 5.144 - 5.167: 97.7981% ( 2) 00:13:23.936 5.167 - 5.191: 97.8059% ( 1) 
00:13:23.936 5.215 - 5.239: 97.8215% ( 2) 00:13:23.936 5.262 - 5.286: 97.8293% ( 1) 00:13:23.936 5.404 - 5.428: 97.8371% ( 1) 00:13:23.936 5.594 - 5.618: 97.8449% ( 1) 00:13:23.936 5.641 - 5.665: 97.8527% ( 1) 00:13:23.936 5.713 - 5.736: 97.8605% ( 1) 00:13:23.936 5.784 - 5.807: 97.8684% ( 1) 00:13:23.936 5.831 - 5.855: 97.8762% ( 1) 00:13:23.936 5.902 - 5.926: 97.8840% ( 1) 00:13:23.936 5.926 - 5.950: 97.8918% ( 1) 00:13:23.936 5.950 - 5.973: 97.8996% ( 1) 00:13:23.936 6.044 - 6.068: 97.9074% ( 1) 00:13:23.936 6.400 - 6.447: 97.9152% ( 1) 00:13:23.936 6.447 - 6.495: 97.9230% ( 1) 00:13:23.936 6.779 - 6.827: 97.9308% ( 1) 00:13:23.936 6.969 - 7.016: 97.9464% ( 2) 00:13:23.936 7.111 - 7.159: 97.9542% ( 1) 00:13:23.936 7.159 - 7.206: 97.9699% ( 2) 00:13:23.936 7.253 - 7.301: 97.9777% ( 1) 00:13:23.936 7.348 - 7.396: 98.0089% ( 4) 00:13:23.936 7.443 - 7.490: 98.0245% ( 2) 00:13:23.936 7.490 - 7.538: 98.0323% ( 1) 00:13:23.936 7.538 - 7.585: 98.0479% ( 2) 00:13:23.936 7.585 - 7.633: 98.0714% ( 3) 00:13:23.936 7.680 - 7.727: 98.0792% ( 1) 00:13:23.936 7.870 - 7.917: 98.0948% ( 2) 00:13:23.936 7.917 - 7.964: 98.1104% ( 2) 00:13:23.936 7.964 - 8.012: 98.1182% ( 1) 00:13:23.936 8.012 - 8.059: 98.1260% ( 1) 00:13:23.936 8.154 - 8.201: 98.1573% ( 4) 00:13:23.936 8.201 - 8.249: 98.1651% ( 1) 00:13:23.936 8.391 - 8.439: 98.1963% ( 4) 00:13:23.936 8.486 - 8.533: 98.2041% ( 1) 00:13:23.936 8.581 - 8.628: 98.2275% ( 3) 00:13:23.936 8.628 - 8.676: 98.2353% ( 1) 00:13:23.936 8.676 - 8.723: 98.2431% ( 1) 00:13:23.936 8.723 - 8.770: 98.2588% ( 2) 00:13:23.936 8.865 - 8.913: 98.2666% ( 1) 00:13:23.936 8.960 - 9.007: 98.2744% ( 1) 00:13:23.936 9.007 - 9.055: 98.2822% ( 1) 00:13:23.936 9.244 - 9.292: 98.2900% ( 1) 00:13:23.936 9.481 - 9.529: 98.2978% ( 1) 00:13:23.936 9.529 - 9.576: 98.3056% ( 1) 00:13:23.936 9.624 - 9.671: 98.3212% ( 2) 00:13:23.936 9.861 - 9.908: 98.3290% ( 1) 00:13:23.936 9.908 - 9.956: 98.3368% ( 1) 00:13:23.936 10.145 - 10.193: 98.3447% ( 1) 00:13:23.936 10.240 - 
10.287: 98.3525% ( 1) 00:13:23.936 10.335 - 10.382: 98.3603% ( 1) 00:13:23.936 10.382 - 10.430: 98.3681% ( 1) 00:13:23.936 10.430 - 10.477: 98.3759% ( 1) 00:13:23.936 10.477 - 10.524: 98.3915% ( 2) 00:13:23.936 10.667 - 10.714: 98.3993% ( 1) 00:13:23.936 10.761 - 10.809: 98.4071% ( 1) 00:13:23.936 10.856 - 10.904: 98.4149% ( 1) 00:13:23.936 10.904 - 10.951: 98.4227% ( 1) 00:13:23.936 10.999 - 11.046: 98.4305% ( 1) 00:13:23.936 11.093 - 11.141: 98.4384% ( 1) 00:13:23.936 11.188 - 11.236: 98.4540% ( 2) 00:13:23.936 11.236 - 11.283: 98.4618% ( 1) 00:13:23.936 11.425 - 11.473: 98.4774% ( 2) 00:13:23.936 11.615 - 11.662: 98.4852% ( 1) 00:13:23.936 11.710 - 11.757: 98.4930% ( 1) 00:13:23.936 11.757 - 11.804: 98.5008% ( 1) 00:13:23.936 11.804 - 11.852: 98.5164% ( 2) 00:13:23.936 11.947 - 11.994: 98.5242% ( 1) 00:13:23.937 12.041 - 12.089: 98.5321% ( 1) 00:13:23.937 12.231 - 12.326: 98.5399% ( 1) 00:13:23.937 12.326 - 12.421: 98.5477% ( 1) 00:13:23.937 12.421 - 12.516: 98.5555% ( 1) 00:13:23.937 12.516 - 12.610: 98.5711% ( 2) 00:13:23.937 12.610 - 12.705: 98.5789% ( 1) 00:13:23.937 12.705 - 12.800: 98.5867% ( 1) 00:13:23.937 12.990 - 13.084: 98.5945% ( 1) 00:13:23.937 13.464 - 13.559: 98.6101% ( 2) 00:13:23.937 13.843 - 13.938: 98.6179% ( 1) 00:13:23.937 13.938 - 14.033: 98.6258% ( 1) 00:13:23.937 14.222 - 14.317: 98.6414% ( 2) 00:13:23.937 14.317 - 14.412: 98.6492% ( 1) 00:13:23.937 14.696 - 14.791: 98.6648% ( 2) 00:13:23.937 14.791 - 14.886: 98.6726% ( 1) 00:13:23.937 14.981 - 15.076: 98.6804% ( 1) 00:13:23.937 15.076 - 15.170: 98.6882% ( 1) 00:13:23.937 16.877 - 16.972: 98.6960% ( 1) 00:13:23.937 16.972 - 17.067: 98.7038% ( 1) 00:13:23.937 17.067 - 17.161: 98.7195% ( 2) 00:13:23.937 17.161 - 17.256: 98.7273% ( 1) 00:13:23.937 17.256 - 17.351: 98.7351% ( 1) 00:13:23.937 17.351 - 17.446: 98.7663% ( 4) 00:13:23.937 17.446 - 17.541: 98.8366% ( 9) 00:13:23.937 17.541 - 17.636: 98.8756% ( 5) 00:13:23.937 17.636 - 17.730: 98.9381% ( 8) 00:13:23.937 17.730 - 17.825: 98.9849% ( 
6) 00:13:23.937 17.825 - 17.920: 99.0786% ( 12) 00:13:23.937 17.920 - 18.015: 99.1411% ( 8) 00:13:23.937 18.015 - 18.110: 99.2270% ( 11) 00:13:23.937 18.110 - 18.204: 99.2895% ( 8) 00:13:23.937 18.204 - 18.299: 99.3597% ( 9) 00:13:23.937 18.299 - 18.394: 99.3988% ( 5) 00:13:23.937 18.394 - 18.489: 99.5081% ( 14) 00:13:23.937 18.489 - 18.584: 99.6252% ( 15) 00:13:23.937 18.584 - 18.679: 99.6877% ( 8) 00:13:23.937 18.679 - 18.773: 99.7111% ( 3) 00:13:23.937 18.773 - 18.868: 99.7345% ( 3) 00:13:23.937 18.868 - 18.963: 99.7579% ( 3) 00:13:23.937 18.963 - 19.058: 99.7970% ( 5) 00:13:23.937 19.058 - 19.153: 99.8126% ( 2) 00:13:23.937 19.153 - 19.247: 99.8204% ( 1) 00:13:23.937 19.437 - 19.532: 99.8438% ( 3) 00:13:23.937 19.532 - 19.627: 99.8516% ( 1) 00:13:23.937 19.721 - 19.816: 99.8595% ( 1) 00:13:23.937 19.816 - 19.911: 99.8751% ( 2) 00:13:23.937 20.006 - 20.101: 99.8907% ( 2) 00:13:23.937 20.385 - 20.480: 99.8985% ( 1) 00:13:23.937 20.764 - 20.859: 99.9063% ( 1) 00:13:23.937 21.713 - 21.807: 99.9141% ( 1) 00:13:23.937 23.230 - 23.324: 99.9219% ( 1) 00:13:23.937 23.704 - 23.799: 99.9297% ( 1) 00:13:23.937 27.876 - 28.065: 99.9375% ( 1) 00:13:23.937 3980.705 - 4004.978: 99.9844% ( 6) 00:13:23.937 4004.978 - 4029.250: 100.0000% ( 2) 00:13:23.937 00:13:23.937 Complete histogram 00:13:23.937 ================== 00:13:23.937 Range in us Cumulative Count 00:13:23.937 2.074 - 2.086: 8.4407% ( 1081) 00:13:23.937 2.086 - 2.098: 33.4817% ( 3207) 00:13:23.937 2.098 - 2.110: 35.8164% ( 299) 00:13:23.937 2.110 - 2.121: 41.5866% ( 739) 00:13:23.937 2.121 - 2.133: 45.9827% ( 563) 00:13:23.937 2.133 - 2.145: 47.2710% ( 165) 00:13:23.937 2.145 - 2.157: 56.0943% ( 1130) 00:13:23.937 2.157 - 2.169: 64.4491% ( 1070) 00:13:23.937 2.169 - 2.181: 65.6516% ( 154) 00:13:23.937 2.181 - 2.193: 68.6109% ( 379) 00:13:23.937 2.193 - 2.204: 70.6645% ( 263) 00:13:23.937 2.204 - 2.216: 71.3750% ( 91) 00:13:23.937 2.216 - 2.228: 76.9189% ( 710) 00:13:23.937 2.228 - 2.240: 82.9624% ( 774) 00:13:23.937 
2.240 - 2.252: 85.6563% ( 345) 00:13:23.937 2.252 - 2.264: 88.7171% ( 392) 00:13:23.937 2.264 - 2.276: 90.4505% ( 222) 00:13:23.937 2.276 - 2.287: 91.0049% ( 71) 00:13:23.937 2.287 - 2.299: 91.6999% ( 89) 00:13:23.937 2.299 - 2.311: 92.4807% ( 100) 00:13:23.937 2.311 - 2.323: 94.0501% ( 201) 00:13:23.937 2.323 - 2.335: 94.6201% ( 73) 00:13:23.937 2.335 - 2.347: 94.6982% ( 10) 00:13:23.937 2.347 - 2.359: 94.7997% ( 13) 00:13:23.937 2.359 - 2.370: 94.9168% ( 15) 00:13:23.937 2.370 - 2.382: 95.0574% ( 18) 00:13:23.937 2.382 - 2.394: 95.6352% ( 74) 00:13:23.937 2.394 - 2.406: 96.4629% ( 106) 00:13:23.937 2.406 - 2.418: 96.8142% ( 45) 00:13:23.937 2.418 - 2.430: 97.0016% ( 24) 00:13:23.937 2.430 - 2.441: 97.2125% ( 27) 00:13:23.937 2.441 - 2.453: 97.2905% ( 10) 00:13:23.937 2.453 - 2.465: 97.4779% ( 24) 00:13:23.937 2.465 - 2.477: 97.5638% ( 11) 00:13:23.937 2.477 - 2.489: 97.6810% ( 15) 00:13:23.937 2.489 - 2.501: 97.7903% ( 14) 00:13:23.937 2.501 - 2.513: 97.8371% ( 6) 00:13:23.937 2.513 - 2.524: 97.9152% ( 10) 00:13:23.937 2.524 - 2.536: 97.9386% ( 3) 00:13:23.937 2.536 - 2.548: 97.9855% ( 6) 00:13:23.937 2.548 - 2.560: 98.0167% ( 4) 00:13:23.937 2.560 - 2.572: 98.0401% ( 3) 00:13:23.937 2.572 - 2.584: 98.0792% ( 5) 00:13:23.937 2.584 - 2.596: 98.1026% ( 3) 00:13:23.937 2.596 - 2.607: 98.1104% ( 1) 00:13:23.937 2.619 - 2.631: 98.1182% ( 1) 00:13:23.937 2.631 - 2.643: 98.1260% ( 1) 00:13:23.937 2.643 - 2.655: 98.1416% ( 2) 00:13:23.937 2.655 - 2.667: 98.1494% ( 1) 00:13:23.937 2.679 - 2.690: 98.1573% ( 1) 00:13:23.937 2.690 - 2.702: 98.1651% ( 1) 00:13:23.937 2.714 - 2.726: 98.1729% ( 1) 00:13:23.937 2.773 - 2.785: 98.1807% ( 1) 00:13:23.937 2.785 - 2.797: 98.1885% ( 1) 00:13:23.937 2.809 - 2.821: 98.1963% ( 1) 00:13:23.937 2.868 - 2.880: 98.2119% ( 2) 00:13:23.937 2.939 - 2.951: 98.2197% ( 1) 00:13:23.937 2.951 - 2.963: 98.2275% ( 1) 00:13:23.937 3.200 - 3.224: 98.2353% ( 1) 00:13:23.937 3.366 - 3.390: 98.2510% ( 2) 00:13:23.937 3.390 - 3.413: 98.2588% ( 1) 
[2024-11-20 09:04:00.560137] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:13:23.937 3.413 - 3.437: 98.2744% ( 2) 00:13:23.937 3.437 - 3.461: 98.2978% ( 3) 00:13:23.937 3.461 - 3.484: 98.3134% ( 2) 00:13:23.937 3.484 - 3.508: 98.3212% ( 1) 00:13:23.937 3.508 - 3.532: 98.3290% ( 1) 00:13:23.937 3.532 - 3.556: 98.3368% ( 1) 00:13:23.937 3.556 - 3.579: 98.3525% ( 2) 00:13:23.937 3.579 - 3.603: 98.3681% ( 2) 00:13:23.937 3.603 - 3.627: 98.3837% ( 2) 00:13:23.937 3.627 - 3.650: 98.3993% ( 2) 00:13:23.937 3.674 - 3.698: 98.4071% ( 1) 00:13:23.937 3.698 - 3.721: 98.4149% ( 1) 00:13:23.937 3.721 - 3.745: 98.4227% ( 1) 00:13:23.938 3.745 - 3.769: 98.4305% ( 1) 00:13:23.938 3.840 - 3.864: 98.4384% ( 1) 00:13:23.938 3.864 - 3.887: 98.4462% ( 1) 00:13:23.938 3.887 - 3.911: 98.4540% ( 1) 00:13:23.938 3.911 - 3.935: 98.4618% ( 1) 00:13:23.938 3.935 - 3.959: 98.4852% ( 3) 00:13:23.938 3.959 - 3.982: 98.4930% ( 1) 00:13:23.938 4.006 - 4.030: 98.5008% ( 1) 00:13:23.938 4.196 - 4.219: 98.5086% ( 1) 00:13:23.938 4.741 - 4.764: 98.5164% ( 1) 00:13:23.938 4.859 - 4.883: 98.5242% ( 1) 00:13:23.938 5.310 - 5.333: 98.5399% ( 2) 00:13:23.938 5.428 - 5.452: 98.5477% ( 1) 00:13:23.938 5.523 - 5.547: 98.5555% ( 1) 00:13:23.938 5.547 - 5.570: 98.5633% ( 1) 00:13:23.938 5.784 - 5.807: 98.5789% ( 2) 00:13:23.938 5.807 - 5.831: 98.5867% ( 1) 00:13:23.938 5.879 - 5.902: 98.5945% ( 1) 00:13:23.938 6.068 - 6.116: 98.6023% ( 1) 00:13:23.938 6.116 - 6.163: 98.6101% ( 1) 00:13:23.938 6.210 - 6.258: 98.6258% ( 2) 00:13:23.938 6.258 - 6.305: 98.6414% ( 2) 00:13:23.938 6.353 - 6.400: 98.6492% ( 1) 00:13:23.938 6.542 - 6.590: 98.6648% ( 2) 00:13:23.938 6.732 - 6.779: 98.6726% ( 1) 00:13:23.938 7.680 - 7.727: 98.6804% ( 1) 00:13:23.938 7.727 - 7.775: 98.6882% ( 1) 00:13:23.938 7.917 - 7.964: 98.6960% ( 1) 00:13:23.938 8.059 - 8.107: 98.7038% ( 1) 00:13:23.938 8.344 - 8.391: 98.7195% ( 2) 00:13:23.938 15.455 - 15.550: 98.7273% ( 1)
00:13:23.938 15.550 - 15.644: 98.7351% ( 1) 00:13:23.938 15.739 - 15.834: 98.7429% ( 1) 00:13:23.938 15.834 - 15.929: 98.7819% ( 5) 00:13:23.938 15.929 - 16.024: 98.7897% ( 1) 00:13:23.938 16.024 - 16.119: 98.8678% ( 10) 00:13:23.938 16.119 - 16.213: 98.8912% ( 3) 00:13:23.938 16.213 - 16.308: 98.9147% ( 3) 00:13:23.938 16.308 - 16.403: 98.9459% ( 4) 00:13:23.938 16.403 - 16.498: 98.9771% ( 4) 00:13:23.938 16.498 - 16.593: 99.0708% ( 12) 00:13:23.938 16.593 - 16.687: 99.1177% ( 6) 00:13:23.938 16.687 - 16.782: 99.1879% ( 9) 00:13:23.938 16.782 - 16.877: 99.2582% ( 9) 00:13:23.938 16.877 - 16.972: 99.2895% ( 4) 00:13:23.938 17.067 - 17.161: 99.2973% ( 1) 00:13:23.938 17.161 - 17.256: 99.3129% ( 2) 00:13:23.938 17.256 - 17.351: 99.3207% ( 1) 00:13:23.938 17.351 - 17.446: 99.3285% ( 1) 00:13:23.938 17.446 - 17.541: 99.3363% ( 1) 00:13:23.938 17.541 - 17.636: 99.3441% ( 1) 00:13:23.938 17.730 - 17.825: 99.3519% ( 1) 00:13:23.938 17.920 - 18.015: 99.3597% ( 1) 00:13:23.938 3980.705 - 4004.978: 99.8829% ( 67) 00:13:23.938 4004.978 - 4029.250: 100.0000% ( 15) 00:13:23.938 00:13:23.938 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:23.938 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:23.938 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:23.938 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:23.938 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:23.938 [ 00:13:23.938 { 00:13:23.938 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:23.938 "subtype": "Discovery", 
00:13:23.938 "listen_addresses": [], 00:13:23.938 "allow_any_host": true, 00:13:23.938 "hosts": [] 00:13:23.938 }, 00:13:23.938 { 00:13:23.938 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:23.938 "subtype": "NVMe", 00:13:23.938 "listen_addresses": [ 00:13:23.938 { 00:13:23.938 "trtype": "VFIOUSER", 00:13:23.938 "adrfam": "IPv4", 00:13:23.938 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:23.938 "trsvcid": "0" 00:13:23.938 } 00:13:23.938 ], 00:13:23.938 "allow_any_host": true, 00:13:23.938 "hosts": [], 00:13:23.938 "serial_number": "SPDK1", 00:13:23.938 "model_number": "SPDK bdev Controller", 00:13:23.938 "max_namespaces": 32, 00:13:23.938 "min_cntlid": 1, 00:13:23.938 "max_cntlid": 65519, 00:13:23.938 "namespaces": [ 00:13:23.938 { 00:13:23.938 "nsid": 1, 00:13:23.938 "bdev_name": "Malloc1", 00:13:23.938 "name": "Malloc1", 00:13:23.938 "nguid": "7E635452542C41E2A6EBB8F66401E07B", 00:13:23.938 "uuid": "7e635452-542c-41e2-a6eb-b8f66401e07b" 00:13:23.938 } 00:13:23.938 ] 00:13:23.938 }, 00:13:23.938 { 00:13:23.938 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:23.938 "subtype": "NVMe", 00:13:23.938 "listen_addresses": [ 00:13:23.938 { 00:13:23.938 "trtype": "VFIOUSER", 00:13:23.938 "adrfam": "IPv4", 00:13:23.938 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:23.938 "trsvcid": "0" 00:13:23.938 } 00:13:23.938 ], 00:13:23.938 "allow_any_host": true, 00:13:23.938 "hosts": [], 00:13:23.938 "serial_number": "SPDK2", 00:13:23.938 "model_number": "SPDK bdev Controller", 00:13:23.938 "max_namespaces": 32, 00:13:23.938 "min_cntlid": 1, 00:13:23.938 "max_cntlid": 65519, 00:13:23.938 "namespaces": [ 00:13:23.938 { 00:13:23.938 "nsid": 1, 00:13:23.938 "bdev_name": "Malloc2", 00:13:23.938 "name": "Malloc2", 00:13:23.938 "nguid": "794CFF3DC8D244F5A28A39D01D3CB54B", 00:13:23.938 "uuid": "794cff3d-c8d2-44f5-a28a-39d01d3cb54b" 00:13:23.938 } 00:13:23.938 ] 00:13:23.938 } 00:13:23.938 ] 00:13:23.938 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:23.938 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3304863 00:13:23.938 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:23.938 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:23.938 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:13:23.938 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:23.938 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:23.938 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:13:23.938 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:23.938 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:24.196 [2024-11-20 09:04:01.041803] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:24.196 Malloc3 00:13:24.196 09:04:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:24.453 [2024-11-20 09:04:01.441831] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:24.453 
09:04:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:24.710 Asynchronous Event Request test 00:13:24.710 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:24.710 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:24.710 Registering asynchronous event callbacks... 00:13:24.710 Starting namespace attribute notice tests for all controllers... 00:13:24.710 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:24.710 aer_cb - Changed Namespace 00:13:24.710 Cleaning up... 00:13:24.710 [ 00:13:24.710 { 00:13:24.710 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:24.710 "subtype": "Discovery", 00:13:24.710 "listen_addresses": [], 00:13:24.710 "allow_any_host": true, 00:13:24.710 "hosts": [] 00:13:24.710 }, 00:13:24.710 { 00:13:24.710 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:24.710 "subtype": "NVMe", 00:13:24.710 "listen_addresses": [ 00:13:24.710 { 00:13:24.710 "trtype": "VFIOUSER", 00:13:24.710 "adrfam": "IPv4", 00:13:24.710 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:24.710 "trsvcid": "0" 00:13:24.710 } 00:13:24.710 ], 00:13:24.710 "allow_any_host": true, 00:13:24.710 "hosts": [], 00:13:24.710 "serial_number": "SPDK1", 00:13:24.710 "model_number": "SPDK bdev Controller", 00:13:24.710 "max_namespaces": 32, 00:13:24.710 "min_cntlid": 1, 00:13:24.710 "max_cntlid": 65519, 00:13:24.710 "namespaces": [ 00:13:24.710 { 00:13:24.710 "nsid": 1, 00:13:24.710 "bdev_name": "Malloc1", 00:13:24.710 "name": "Malloc1", 00:13:24.710 "nguid": "7E635452542C41E2A6EBB8F66401E07B", 00:13:24.710 "uuid": "7e635452-542c-41e2-a6eb-b8f66401e07b" 00:13:24.710 }, 00:13:24.710 { 00:13:24.710 "nsid": 2, 00:13:24.710 "bdev_name": "Malloc3", 00:13:24.710 "name": "Malloc3", 00:13:24.710 "nguid": "F36BC2465FAA4D5381BB03B9E0E996DB", 00:13:24.710 "uuid": "f36bc246-5faa-4d53-81bb-03b9e0e996db" 
00:13:24.710 } 00:13:24.710 ] 00:13:24.710 }, 00:13:24.710 { 00:13:24.710 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:24.710 "subtype": "NVMe", 00:13:24.710 "listen_addresses": [ 00:13:24.710 { 00:13:24.710 "trtype": "VFIOUSER", 00:13:24.710 "adrfam": "IPv4", 00:13:24.710 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:24.710 "trsvcid": "0" 00:13:24.710 } 00:13:24.710 ], 00:13:24.710 "allow_any_host": true, 00:13:24.710 "hosts": [], 00:13:24.710 "serial_number": "SPDK2", 00:13:24.710 "model_number": "SPDK bdev Controller", 00:13:24.710 "max_namespaces": 32, 00:13:24.710 "min_cntlid": 1, 00:13:24.710 "max_cntlid": 65519, 00:13:24.710 "namespaces": [ 00:13:24.710 { 00:13:24.710 "nsid": 1, 00:13:24.710 "bdev_name": "Malloc2", 00:13:24.710 "name": "Malloc2", 00:13:24.710 "nguid": "794CFF3DC8D244F5A28A39D01D3CB54B", 00:13:24.710 "uuid": "794cff3d-c8d2-44f5-a28a-39d01d3cb54b" 00:13:24.710 } 00:13:24.710 ] 00:13:24.710 } 00:13:24.710 ] 00:13:24.710 09:04:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3304863 00:13:24.710 09:04:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:24.710 09:04:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:24.710 09:04:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:24.711 09:04:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:24.969 [2024-11-20 09:04:01.742604] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:13:24.969 [2024-11-20 09:04:01.742648] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3304999 ] 00:13:24.969 [2024-11-20 09:04:01.790736] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:24.969 [2024-11-20 09:04:01.803665] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:24.969 [2024-11-20 09:04:01.803700] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc315548000 00:13:24.969 [2024-11-20 09:04:01.804664] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:24.969 [2024-11-20 09:04:01.805669] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:24.969 [2024-11-20 09:04:01.806677] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:24.969 [2024-11-20 09:04:01.807690] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:24.969 [2024-11-20 09:04:01.808686] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:24.969 [2024-11-20 09:04:01.809698] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:24.969 [2024-11-20 09:04:01.810708] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:24.969 
[2024-11-20 09:04:01.811715] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:24.969 [2024-11-20 09:04:01.812721] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:24.969 [2024-11-20 09:04:01.812743] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc31553d000 00:13:24.969 [2024-11-20 09:04:01.813903] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:24.969 [2024-11-20 09:04:01.828705] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:24.969 [2024-11-20 09:04:01.828748] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:13:24.969 [2024-11-20 09:04:01.830848] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:24.969 [2024-11-20 09:04:01.830905] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:24.969 [2024-11-20 09:04:01.830995] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:13:24.969 [2024-11-20 09:04:01.831024] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:13:24.969 [2024-11-20 09:04:01.831035] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:13:24.969 [2024-11-20 09:04:01.831858] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:24.969 [2024-11-20 09:04:01.831880] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:13:24.969 [2024-11-20 09:04:01.831893] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:13:24.969 [2024-11-20 09:04:01.832859] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:24.970 [2024-11-20 09:04:01.832881] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:13:24.970 [2024-11-20 09:04:01.832894] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:24.970 [2024-11-20 09:04:01.833862] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:24.970 [2024-11-20 09:04:01.833882] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:24.970 [2024-11-20 09:04:01.834872] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:24.970 [2024-11-20 09:04:01.834892] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:24.970 [2024-11-20 09:04:01.834901] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:24.970 [2024-11-20 09:04:01.834913] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:24.970 [2024-11-20 09:04:01.835022] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:13:24.970 [2024-11-20 09:04:01.835030] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:24.970 [2024-11-20 09:04:01.835039] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:24.970 [2024-11-20 09:04:01.835878] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:24.970 [2024-11-20 09:04:01.836885] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:24.970 [2024-11-20 09:04:01.837893] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:24.970 [2024-11-20 09:04:01.838892] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:24.970 [2024-11-20 09:04:01.838969] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:24.970 [2024-11-20 09:04:01.839918] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:24.970 [2024-11-20 09:04:01.839941] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:24.970 [2024-11-20 09:04:01.839956] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:24.970 [2024-11-20 09:04:01.839981] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:13:24.970 [2024-11-20 09:04:01.839999] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:24.970 [2024-11-20 09:04:01.840024] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:24.970 [2024-11-20 09:04:01.840034] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:24.970 [2024-11-20 09:04:01.840041] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:24.970 [2024-11-20 09:04:01.840060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:24.970 [2024-11-20 09:04:01.846319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:24.970 [2024-11-20 09:04:01.846345] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:13:24.970 [2024-11-20 09:04:01.846365] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:13:24.970 [2024-11-20 09:04:01.846372] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:13:24.970 [2024-11-20 09:04:01.846380] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:24.970 [2024-11-20 09:04:01.846393] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:13:24.970 [2024-11-20 09:04:01.846403] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:13:24.970 [2024-11-20 09:04:01.846411] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:13:24.970 [2024-11-20 09:04:01.846427] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:24.970 [2024-11-20 09:04:01.846444] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:24.970 [2024-11-20 09:04:01.854316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:24.970 [2024-11-20 09:04:01.854341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:24.970 [2024-11-20 09:04:01.854354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:24.970 [2024-11-20 09:04:01.854366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:24.970 [2024-11-20 09:04:01.854378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:24.970 [2024-11-20 09:04:01.854387] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:24.970 [2024-11-20 09:04:01.854399] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:24.970 [2024-11-20 09:04:01.854412] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:24.970 [2024-11-20 09:04:01.862314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:24.970 [2024-11-20 09:04:01.862339] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:13:24.970 [2024-11-20 09:04:01.862351] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:24.970 [2024-11-20 09:04:01.862363] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:13:24.970 [2024-11-20 09:04:01.862373] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:24.970 [2024-11-20 09:04:01.862387] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:24.970 [2024-11-20 09:04:01.870316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:24.970 [2024-11-20 09:04:01.870396] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:13:24.970 [2024-11-20 09:04:01.870414] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:24.970 
[2024-11-20 09:04:01.870427] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:24.970 [2024-11-20 09:04:01.870435] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:24.970 [2024-11-20 09:04:01.870442] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:24.970 [2024-11-20 09:04:01.870451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:24.970 [2024-11-20 09:04:01.878311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:24.970 [2024-11-20 09:04:01.878337] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:13:24.970 [2024-11-20 09:04:01.878354] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:13:24.970 [2024-11-20 09:04:01.878370] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:24.970 [2024-11-20 09:04:01.878384] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:24.970 [2024-11-20 09:04:01.878392] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:24.970 [2024-11-20 09:04:01.878398] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:24.970 [2024-11-20 09:04:01.878407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:24.970 [2024-11-20 09:04:01.886315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:24.970 [2024-11-20 09:04:01.886349] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:24.970 [2024-11-20 09:04:01.886366] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:24.970 [2024-11-20 09:04:01.886380] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:24.970 [2024-11-20 09:04:01.886388] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:24.970 [2024-11-20 09:04:01.886398] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:24.970 [2024-11-20 09:04:01.886408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:24.970 [2024-11-20 09:04:01.894327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:24.970 [2024-11-20 09:04:01.894349] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:24.970 [2024-11-20 09:04:01.894362] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:24.970 [2024-11-20 09:04:01.894378] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:13:24.970 [2024-11-20 09:04:01.894389] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:13:24.970 [2024-11-20 09:04:01.894398] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:24.970 [2024-11-20 09:04:01.894407] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:13:24.970 [2024-11-20 09:04:01.894415] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:24.971 [2024-11-20 09:04:01.894423] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:13:24.971 [2024-11-20 09:04:01.894432] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:13:24.971 [2024-11-20 09:04:01.894459] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:24.971 [2024-11-20 09:04:01.902315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:24.971 [2024-11-20 09:04:01.902340] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:24.971 [2024-11-20 09:04:01.910314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:24.971 [2024-11-20 09:04:01.910339] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:24.971 [2024-11-20 09:04:01.918315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:24.971 [2024-11-20 
09:04:01.918340] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:24.971 [2024-11-20 09:04:01.926315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:24.971 [2024-11-20 09:04:01.926357] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:24.971 [2024-11-20 09:04:01.926369] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:24.971 [2024-11-20 09:04:01.926375] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:24.971 [2024-11-20 09:04:01.926380] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:24.971 [2024-11-20 09:04:01.926386] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:24.971 [2024-11-20 09:04:01.926396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:24.971 [2024-11-20 09:04:01.926412] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:24.971 [2024-11-20 09:04:01.926421] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:24.971 [2024-11-20 09:04:01.926427] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:24.971 [2024-11-20 09:04:01.926436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:24.971 [2024-11-20 09:04:01.926447] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:24.971 [2024-11-20 09:04:01.926455] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:24.971 [2024-11-20 09:04:01.926461] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:24.971 [2024-11-20 09:04:01.926479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:24.971 [2024-11-20 09:04:01.926491] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:24.971 [2024-11-20 09:04:01.926499] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:24.971 [2024-11-20 09:04:01.926505] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:24.971 [2024-11-20 09:04:01.926514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:24.971 [2024-11-20 09:04:01.934315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:24.971 [2024-11-20 09:04:01.934344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:24.971 [2024-11-20 09:04:01.934362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:24.971 [2024-11-20 09:04:01.934374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:24.971 ===================================================== 00:13:24.971 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:24.971 ===================================================== 00:13:24.971 Controller Capabilities/Features 00:13:24.971 
================================ 00:13:24.971 Vendor ID: 4e58 00:13:24.971 Subsystem Vendor ID: 4e58 00:13:24.971 Serial Number: SPDK2 00:13:24.971 Model Number: SPDK bdev Controller 00:13:24.971 Firmware Version: 25.01 00:13:24.971 Recommended Arb Burst: 6 00:13:24.971 IEEE OUI Identifier: 8d 6b 50 00:13:24.971 Multi-path I/O 00:13:24.971 May have multiple subsystem ports: Yes 00:13:24.971 May have multiple controllers: Yes 00:13:24.971 Associated with SR-IOV VF: No 00:13:24.971 Max Data Transfer Size: 131072 00:13:24.971 Max Number of Namespaces: 32 00:13:24.971 Max Number of I/O Queues: 127 00:13:24.971 NVMe Specification Version (VS): 1.3 00:13:24.971 NVMe Specification Version (Identify): 1.3 00:13:24.971 Maximum Queue Entries: 256 00:13:24.971 Contiguous Queues Required: Yes 00:13:24.971 Arbitration Mechanisms Supported 00:13:24.971 Weighted Round Robin: Not Supported 00:13:24.971 Vendor Specific: Not Supported 00:13:24.971 Reset Timeout: 15000 ms 00:13:24.971 Doorbell Stride: 4 bytes 00:13:24.971 NVM Subsystem Reset: Not Supported 00:13:24.971 Command Sets Supported 00:13:24.971 NVM Command Set: Supported 00:13:24.971 Boot Partition: Not Supported 00:13:24.971 Memory Page Size Minimum: 4096 bytes 00:13:24.971 Memory Page Size Maximum: 4096 bytes 00:13:24.971 Persistent Memory Region: Not Supported 00:13:24.971 Optional Asynchronous Events Supported 00:13:24.971 Namespace Attribute Notices: Supported 00:13:24.971 Firmware Activation Notices: Not Supported 00:13:24.971 ANA Change Notices: Not Supported 00:13:24.971 PLE Aggregate Log Change Notices: Not Supported 00:13:24.971 LBA Status Info Alert Notices: Not Supported 00:13:24.971 EGE Aggregate Log Change Notices: Not Supported 00:13:24.971 Normal NVM Subsystem Shutdown event: Not Supported 00:13:24.971 Zone Descriptor Change Notices: Not Supported 00:13:24.971 Discovery Log Change Notices: Not Supported 00:13:24.971 Controller Attributes 00:13:24.971 128-bit Host Identifier: Supported 00:13:24.971 
Non-Operational Permissive Mode: Not Supported 00:13:24.971 NVM Sets: Not Supported 00:13:24.971 Read Recovery Levels: Not Supported 00:13:24.971 Endurance Groups: Not Supported 00:13:24.971 Predictable Latency Mode: Not Supported 00:13:24.971 Traffic Based Keep ALive: Not Supported 00:13:24.971 Namespace Granularity: Not Supported 00:13:24.971 SQ Associations: Not Supported 00:13:24.971 UUID List: Not Supported 00:13:24.971 Multi-Domain Subsystem: Not Supported 00:13:24.971 Fixed Capacity Management: Not Supported 00:13:24.971 Variable Capacity Management: Not Supported 00:13:24.971 Delete Endurance Group: Not Supported 00:13:24.971 Delete NVM Set: Not Supported 00:13:24.971 Extended LBA Formats Supported: Not Supported 00:13:24.971 Flexible Data Placement Supported: Not Supported 00:13:24.971 00:13:24.971 Controller Memory Buffer Support 00:13:24.971 ================================ 00:13:24.971 Supported: No 00:13:24.971 00:13:24.971 Persistent Memory Region Support 00:13:24.971 ================================ 00:13:24.971 Supported: No 00:13:24.971 00:13:24.971 Admin Command Set Attributes 00:13:24.971 ============================ 00:13:24.971 Security Send/Receive: Not Supported 00:13:24.971 Format NVM: Not Supported 00:13:24.971 Firmware Activate/Download: Not Supported 00:13:24.971 Namespace Management: Not Supported 00:13:24.971 Device Self-Test: Not Supported 00:13:24.971 Directives: Not Supported 00:13:24.971 NVMe-MI: Not Supported 00:13:24.971 Virtualization Management: Not Supported 00:13:24.971 Doorbell Buffer Config: Not Supported 00:13:24.971 Get LBA Status Capability: Not Supported 00:13:24.971 Command & Feature Lockdown Capability: Not Supported 00:13:24.971 Abort Command Limit: 4 00:13:24.971 Async Event Request Limit: 4 00:13:24.971 Number of Firmware Slots: N/A 00:13:24.971 Firmware Slot 1 Read-Only: N/A 00:13:24.971 Firmware Activation Without Reset: N/A 00:13:24.971 Multiple Update Detection Support: N/A 00:13:24.971 Firmware Update 
Granularity: No Information Provided 00:13:24.971 Per-Namespace SMART Log: No 00:13:24.971 Asymmetric Namespace Access Log Page: Not Supported 00:13:24.971 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:24.971 Command Effects Log Page: Supported 00:13:24.971 Get Log Page Extended Data: Supported 00:13:24.971 Telemetry Log Pages: Not Supported 00:13:24.971 Persistent Event Log Pages: Not Supported 00:13:24.971 Supported Log Pages Log Page: May Support 00:13:24.971 Commands Supported & Effects Log Page: Not Supported 00:13:24.971 Feature Identifiers & Effects Log Page:May Support 00:13:24.971 NVMe-MI Commands & Effects Log Page: May Support 00:13:24.971 Data Area 4 for Telemetry Log: Not Supported 00:13:24.971 Error Log Page Entries Supported: 128 00:13:24.971 Keep Alive: Supported 00:13:24.971 Keep Alive Granularity: 10000 ms 00:13:24.971 00:13:24.971 NVM Command Set Attributes 00:13:24.971 ========================== 00:13:24.971 Submission Queue Entry Size 00:13:24.971 Max: 64 00:13:24.971 Min: 64 00:13:24.971 Completion Queue Entry Size 00:13:24.971 Max: 16 00:13:24.971 Min: 16 00:13:24.972 Number of Namespaces: 32 00:13:24.972 Compare Command: Supported 00:13:24.972 Write Uncorrectable Command: Not Supported 00:13:24.972 Dataset Management Command: Supported 00:13:24.972 Write Zeroes Command: Supported 00:13:24.972 Set Features Save Field: Not Supported 00:13:24.972 Reservations: Not Supported 00:13:24.972 Timestamp: Not Supported 00:13:24.972 Copy: Supported 00:13:24.972 Volatile Write Cache: Present 00:13:24.972 Atomic Write Unit (Normal): 1 00:13:24.972 Atomic Write Unit (PFail): 1 00:13:24.972 Atomic Compare & Write Unit: 1 00:13:24.972 Fused Compare & Write: Supported 00:13:24.972 Scatter-Gather List 00:13:24.972 SGL Command Set: Supported (Dword aligned) 00:13:24.972 SGL Keyed: Not Supported 00:13:24.972 SGL Bit Bucket Descriptor: Not Supported 00:13:24.972 SGL Metadata Pointer: Not Supported 00:13:24.972 Oversized SGL: Not Supported 00:13:24.972 SGL 
Metadata Address: Not Supported 00:13:24.972 SGL Offset: Not Supported 00:13:24.972 Transport SGL Data Block: Not Supported 00:13:24.972 Replay Protected Memory Block: Not Supported 00:13:24.972 00:13:24.972 Firmware Slot Information 00:13:24.972 ========================= 00:13:24.972 Active slot: 1 00:13:24.972 Slot 1 Firmware Revision: 25.01 00:13:24.972 00:13:24.972 00:13:24.972 Commands Supported and Effects 00:13:24.972 ============================== 00:13:24.972 Admin Commands 00:13:24.972 -------------- 00:13:24.972 Get Log Page (02h): Supported 00:13:24.972 Identify (06h): Supported 00:13:24.972 Abort (08h): Supported 00:13:24.972 Set Features (09h): Supported 00:13:24.972 Get Features (0Ah): Supported 00:13:24.972 Asynchronous Event Request (0Ch): Supported 00:13:24.972 Keep Alive (18h): Supported 00:13:24.972 I/O Commands 00:13:24.972 ------------ 00:13:24.972 Flush (00h): Supported LBA-Change 00:13:24.972 Write (01h): Supported LBA-Change 00:13:24.972 Read (02h): Supported 00:13:24.972 Compare (05h): Supported 00:13:24.972 Write Zeroes (08h): Supported LBA-Change 00:13:24.972 Dataset Management (09h): Supported LBA-Change 00:13:24.972 Copy (19h): Supported LBA-Change 00:13:24.972 00:13:24.972 Error Log 00:13:24.972 ========= 00:13:24.972 00:13:24.972 Arbitration 00:13:24.972 =========== 00:13:24.972 Arbitration Burst: 1 00:13:24.972 00:13:24.972 Power Management 00:13:24.972 ================ 00:13:24.972 Number of Power States: 1 00:13:24.972 Current Power State: Power State #0 00:13:24.972 Power State #0: 00:13:24.972 Max Power: 0.00 W 00:13:24.972 Non-Operational State: Operational 00:13:24.972 Entry Latency: Not Reported 00:13:24.972 Exit Latency: Not Reported 00:13:24.972 Relative Read Throughput: 0 00:13:24.972 Relative Read Latency: 0 00:13:24.972 Relative Write Throughput: 0 00:13:24.972 Relative Write Latency: 0 00:13:24.972 Idle Power: Not Reported 00:13:24.972 Active Power: Not Reported 00:13:24.972 Non-Operational Permissive Mode: Not 
Supported 00:13:24.972 00:13:24.972 Health Information 00:13:24.972 ================== 00:13:24.972 Critical Warnings: 00:13:24.972 Available Spare Space: OK 00:13:24.972 Temperature: OK 00:13:24.972 Device Reliability: OK 00:13:24.972 Read Only: No 00:13:24.972 Volatile Memory Backup: OK 00:13:24.972 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:24.972 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:24.972 Available Spare: 0% 00:13:24.972 
[2024-11-20 09:04:01.934505] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:24.972 [2024-11-20 09:04:01.942315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:24.972 [2024-11-20 09:04:01.942384] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:13:24.972 [2024-11-20 09:04:01.942403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.972 [2024-11-20 09:04:01.942414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.972 [2024-11-20 09:04:01.942424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.972 [2024-11-20 09:04:01.942433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.972 [2024-11-20 09:04:01.942499] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:24.972 [2024-11-20 09:04:01.942521] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:24.972 
[2024-11-20 09:04:01.943507] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:24.972 [2024-11-20 09:04:01.943592] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:13:24.972 [2024-11-20 09:04:01.943626] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:13:24.972 [2024-11-20 09:04:01.946313] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:24.972 [2024-11-20 09:04:01.946339] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 2 milliseconds 00:13:24.972 [2024-11-20 09:04:01.946393] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:24.972 [2024-11-20 09:04:01.947592] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:24.972 
Available Spare Threshold: 0% 00:13:24.972 Life Percentage Used: 0% 00:13:24.972 Data Units Read: 0 00:13:24.972 Data Units Written: 0 00:13:24.972 Host Read Commands: 0 00:13:24.972 Host Write Commands: 0 00:13:24.972 Controller Busy Time: 0 minutes 00:13:24.972 Power Cycles: 0 00:13:24.972 Power On Hours: 0 hours 00:13:24.972 Unsafe Shutdowns: 0 00:13:24.972 Unrecoverable Media Errors: 0 00:13:24.972 Lifetime Error Log Entries: 0 00:13:24.972 Warning Temperature Time: 0 minutes 00:13:24.972 Critical Temperature Time: 0 minutes 00:13:24.972 00:13:24.972 Number of Queues 00:13:24.972 ================ 00:13:24.972 Number of I/O Submission Queues: 127 00:13:24.972 Number of I/O Completion Queues: 127 00:13:24.972 00:13:24.972 Active Namespaces 00:13:24.972 ================= 00:13:24.972 Namespace ID:1 00:13:24.972 Error Recovery Timeout: Unlimited 
00:13:24.972 Command Set Identifier: NVM (00h) 00:13:24.972 Deallocate: Supported 00:13:24.972 Deallocated/Unwritten Error: Not Supported 00:13:24.972 Deallocated Read Value: Unknown 00:13:24.972 Deallocate in Write Zeroes: Not Supported 00:13:24.972 Deallocated Guard Field: 0xFFFF 00:13:24.972 Flush: Supported 00:13:24.972 Reservation: Supported 00:13:24.972 Namespace Sharing Capabilities: Multiple Controllers 00:13:24.972 Size (in LBAs): 131072 (0GiB) 00:13:24.972 Capacity (in LBAs): 131072 (0GiB) 00:13:24.972 Utilization (in LBAs): 131072 (0GiB) 00:13:24.972 NGUID: 794CFF3DC8D244F5A28A39D01D3CB54B 00:13:24.972 UUID: 794cff3d-c8d2-44f5-a28a-39d01d3cb54b 00:13:24.972 Thin Provisioning: Not Supported 00:13:24.972 Per-NS Atomic Units: Yes 00:13:24.972 Atomic Boundary Size (Normal): 0 00:13:24.972 Atomic Boundary Size (PFail): 0 00:13:24.972 Atomic Boundary Offset: 0 00:13:24.972 Maximum Single Source Range Length: 65535 00:13:24.972 Maximum Copy Length: 65535 00:13:24.972 Maximum Source Range Count: 1 00:13:24.972 NGUID/EUI64 Never Reused: No 00:13:24.972 Namespace Write Protected: No 00:13:24.972 Number of LBA Formats: 1 00:13:24.972 Current LBA Format: LBA Format #00 00:13:24.972 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:24.972 00:13:24.972 09:04:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:25.229 [2024-11-20 09:04:02.200044] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:30.514 Initializing NVMe Controllers 00:13:30.514 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:30.514 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:13:30.514 Initialization complete. Launching workers. 00:13:30.514 ======================================================== 00:13:30.514 Latency(us) 00:13:30.514 Device Information : IOPS MiB/s Average min max 00:13:30.514 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33586.51 131.20 3810.45 1175.62 7619.93 00:13:30.514 ======================================================== 00:13:30.514 Total : 33586.51 131.20 3810.45 1175.62 7619.93 00:13:30.514 00:13:30.514 [2024-11-20 09:04:07.303686] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:30.514 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:30.770 [2024-11-20 09:04:07.566448] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:36.027 Initializing NVMe Controllers 00:13:36.027 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:36.027 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:36.027 Initialization complete. Launching workers. 
00:13:36.027 ======================================================== 00:13:36.027 Latency(us) 00:13:36.027 Device Information : IOPS MiB/s Average min max 00:13:36.027 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31507.04 123.07 4061.67 1202.46 8525.79 00:13:36.027 ======================================================== 00:13:36.027 Total : 31507.04 123.07 4061.67 1202.46 8525.79 00:13:36.027 00:13:36.027 [2024-11-20 09:04:12.587063] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:36.027 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:36.027 [2024-11-20 09:04:12.807013] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:41.284 [2024-11-20 09:04:17.937464] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:41.284 Initializing NVMe Controllers 00:13:41.284 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:41.284 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:41.284 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:41.284 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:41.284 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:41.284 Initialization complete. Launching workers. 
00:13:41.284 Starting thread on core 2 00:13:41.284 Starting thread on core 3 00:13:41.284 Starting thread on core 1 00:13:41.284 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:41.284 [2024-11-20 09:04:18.270828] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:44.560 [2024-11-20 09:04:21.450745] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:44.560 Initializing NVMe Controllers 00:13:44.560 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:44.560 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:44.560 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:44.560 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:44.560 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:44.560 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:44.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:44.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:44.560 Initialization complete. Launching workers. 
00:13:44.560 Starting thread on core 1 with urgent priority queue 00:13:44.560 Starting thread on core 2 with urgent priority queue 00:13:44.560 Starting thread on core 3 with urgent priority queue 00:13:44.560 Starting thread on core 0 with urgent priority queue 00:13:44.560 SPDK bdev Controller (SPDK2 ) core 0: 2401.67 IO/s 41.64 secs/100000 ios 00:13:44.560 SPDK bdev Controller (SPDK2 ) core 1: 3030.00 IO/s 33.00 secs/100000 ios 00:13:44.560 SPDK bdev Controller (SPDK2 ) core 2: 3053.33 IO/s 32.75 secs/100000 ios 00:13:44.560 SPDK bdev Controller (SPDK2 ) core 3: 3111.00 IO/s 32.14 secs/100000 ios 00:13:44.560 ======================================================== 00:13:44.560 00:13:44.560 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:44.859 [2024-11-20 09:04:21.773825] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:44.859 Initializing NVMe Controllers 00:13:44.859 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:44.859 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:44.859 Namespace ID: 1 size: 0GB 00:13:44.859 Initialization complete. 00:13:44.859 INFO: using host memory buffer for IO 00:13:44.859 Hello world! 
00:13:44.859 [2024-11-20 09:04:21.787920] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:44.859 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:45.140 [2024-11-20 09:04:22.103828] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:46.518 Initializing NVMe Controllers 00:13:46.518 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:46.518 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:46.518 Initialization complete. Launching workers. 00:13:46.518 submit (in ns) avg, min, max = 8506.6, 3570.0, 4018594.4 00:13:46.518 complete (in ns) avg, min, max = 25788.9, 2061.1, 4019028.9 00:13:46.518 00:13:46.518 Submit histogram 00:13:46.518 ================ 00:13:46.518 Range in us Cumulative Count 00:13:46.518 3.556 - 3.579: 0.3550% ( 47) 00:13:46.518 3.579 - 3.603: 4.6073% ( 563) 00:13:46.518 3.603 - 3.627: 10.7704% ( 816) 00:13:46.518 3.627 - 3.650: 20.5363% ( 1293) 00:13:46.518 3.650 - 3.674: 28.4441% ( 1047) 00:13:46.518 3.674 - 3.698: 37.0091% ( 1134) 00:13:46.518 3.698 - 3.721: 44.5695% ( 1001) 00:13:46.518 3.721 - 3.745: 50.7628% ( 820) 00:13:46.518 3.745 - 3.769: 55.2417% ( 593) 00:13:46.518 3.769 - 3.793: 59.9622% ( 625) 00:13:46.518 3.793 - 3.816: 63.4894% ( 467) 00:13:46.518 3.816 - 3.840: 67.0921% ( 477) 00:13:46.518 3.840 - 3.864: 71.6767% ( 607) 00:13:46.518 3.864 - 3.887: 75.6647% ( 528) 00:13:46.518 3.887 - 3.911: 79.9698% ( 570) 00:13:46.518 3.911 - 3.935: 83.3233% ( 444) 00:13:46.518 3.935 - 3.959: 85.8082% ( 329) 00:13:46.518 3.959 - 3.982: 87.8097% ( 265) 00:13:46.518 3.982 - 4.006: 89.5770% ( 234) 00:13:46.518 4.006 - 4.030: 91.1027% ( 202) 00:13:46.518 4.030 - 4.053: 92.4094% ( 
173) 00:13:46.518 4.053 - 4.077: 93.3912% ( 130) 00:13:46.518 4.077 - 4.101: 94.2976% ( 120) 00:13:46.518 4.101 - 4.124: 95.0906% ( 105) 00:13:46.518 4.124 - 4.148: 95.6722% ( 77) 00:13:46.518 4.148 - 4.172: 96.0196% ( 46) 00:13:46.518 4.172 - 4.196: 96.2462% ( 30) 00:13:46.518 4.196 - 4.219: 96.3746% ( 17) 00:13:46.518 4.219 - 4.243: 96.5030% ( 17) 00:13:46.518 4.243 - 4.267: 96.6541% ( 20) 00:13:46.518 4.267 - 4.290: 96.7825% ( 17) 00:13:46.518 4.290 - 4.314: 96.8656% ( 11) 00:13:46.518 4.314 - 4.338: 96.9562% ( 12) 00:13:46.518 4.338 - 4.361: 97.0468% ( 12) 00:13:46.518 4.361 - 4.385: 97.0921% ( 6) 00:13:46.518 4.385 - 4.409: 97.1073% ( 2) 00:13:46.518 4.409 - 4.433: 97.1450% ( 5) 00:13:46.518 4.433 - 4.456: 97.1601% ( 2) 00:13:46.518 4.456 - 4.480: 97.1828% ( 3) 00:13:46.518 4.480 - 4.504: 97.1903% ( 1) 00:13:46.518 4.527 - 4.551: 97.1979% ( 1) 00:13:46.518 4.551 - 4.575: 97.2130% ( 2) 00:13:46.518 4.575 - 4.599: 97.2281% ( 2) 00:13:46.518 4.599 - 4.622: 97.2583% ( 4) 00:13:46.518 4.622 - 4.646: 97.2734% ( 2) 00:13:46.518 4.646 - 4.670: 97.2885% ( 2) 00:13:46.518 4.670 - 4.693: 97.3036% ( 2) 00:13:46.518 4.693 - 4.717: 97.3338% ( 4) 00:13:46.518 4.717 - 4.741: 97.3565% ( 3) 00:13:46.518 4.741 - 4.764: 97.3716% ( 2) 00:13:46.518 4.764 - 4.788: 97.4169% ( 6) 00:13:46.518 4.788 - 4.812: 97.4622% ( 6) 00:13:46.518 4.812 - 4.836: 97.4849% ( 3) 00:13:46.518 4.836 - 4.859: 97.5453% ( 8) 00:13:46.518 4.859 - 4.883: 97.5831% ( 5) 00:13:46.518 4.883 - 4.907: 97.6133% ( 4) 00:13:46.518 4.907 - 4.930: 97.6737% ( 8) 00:13:46.518 4.930 - 4.954: 97.7341% ( 8) 00:13:46.518 4.954 - 4.978: 97.7644% ( 4) 00:13:46.518 4.978 - 5.001: 97.7870% ( 3) 00:13:46.518 5.001 - 5.025: 97.8021% ( 2) 00:13:46.518 5.025 - 5.049: 97.8323% ( 4) 00:13:46.518 5.049 - 5.073: 97.8474% ( 2) 00:13:46.518 5.073 - 5.096: 97.8852% ( 5) 00:13:46.518 5.096 - 5.120: 97.9003% ( 2) 00:13:46.519 5.120 - 5.144: 97.9154% ( 2) 00:13:46.519 5.144 - 5.167: 97.9456% ( 4) 00:13:46.519 5.167 - 5.191: 97.9532% ( 1) 
00:13:46.519 5.191 - 5.215: 97.9683% ( 2) 00:13:46.519 5.239 - 5.262: 97.9758% ( 1) 00:13:46.519 5.262 - 5.286: 97.9834% ( 1) 00:13:46.519 5.310 - 5.333: 97.9909% ( 1) 00:13:46.519 5.333 - 5.357: 97.9985% ( 1) 00:13:46.519 5.357 - 5.381: 98.0136% ( 2) 00:13:46.519 5.404 - 5.428: 98.0211% ( 1) 00:13:46.519 5.476 - 5.499: 98.0287% ( 1) 00:13:46.519 5.523 - 5.547: 98.0363% ( 1) 00:13:46.519 5.665 - 5.689: 98.0438% ( 1) 00:13:46.519 5.760 - 5.784: 98.0514% ( 1) 00:13:46.519 5.831 - 5.855: 98.0740% ( 3) 00:13:46.519 5.902 - 5.926: 98.0816% ( 1) 00:13:46.519 5.926 - 5.950: 98.0891% ( 1) 00:13:46.519 6.044 - 6.068: 98.0967% ( 1) 00:13:46.519 6.068 - 6.116: 98.1118% ( 2) 00:13:46.519 6.116 - 6.163: 98.1193% ( 1) 00:13:46.519 6.163 - 6.210: 98.1269% ( 1) 00:13:46.519 6.210 - 6.258: 98.1344% ( 1) 00:13:46.519 6.447 - 6.495: 98.1420% ( 1) 00:13:46.519 6.495 - 6.542: 98.1495% ( 1) 00:13:46.519 6.542 - 6.590: 98.1722% ( 3) 00:13:46.519 6.684 - 6.732: 98.1873% ( 2) 00:13:46.519 6.827 - 6.874: 98.2024% ( 2) 00:13:46.519 6.921 - 6.969: 98.2100% ( 1) 00:13:46.519 7.016 - 7.064: 98.2251% ( 2) 00:13:46.519 7.064 - 7.111: 98.2477% ( 3) 00:13:46.519 7.159 - 7.206: 98.2704% ( 3) 00:13:46.519 7.253 - 7.301: 98.2779% ( 1) 00:13:46.519 7.301 - 7.348: 98.2855% ( 1) 00:13:46.519 7.348 - 7.396: 98.2931% ( 1) 00:13:46.519 7.396 - 7.443: 98.3006% ( 1) 00:13:46.519 7.443 - 7.490: 98.3157% ( 2) 00:13:46.519 7.538 - 7.585: 98.3233% ( 1) 00:13:46.519 7.585 - 7.633: 98.3308% ( 1) 00:13:46.519 7.680 - 7.727: 98.3459% ( 2) 00:13:46.519 7.727 - 7.775: 98.3535% ( 1) 00:13:46.519 7.775 - 7.822: 98.3686% ( 2) 00:13:46.519 7.822 - 7.870: 98.3837% ( 2) 00:13:46.519 7.870 - 7.917: 98.3988% ( 2) 00:13:46.519 7.917 - 7.964: 98.4139% ( 2) 00:13:46.519 7.964 - 8.012: 98.4215% ( 1) 00:13:46.519 8.012 - 8.059: 98.4366% ( 2) 00:13:46.519 8.059 - 8.107: 98.4517% ( 2) 00:13:46.519 8.107 - 8.154: 98.4592% ( 1) 00:13:46.519 8.154 - 8.201: 98.4743% ( 2) 00:13:46.519 8.201 - 8.249: 98.4819% ( 1) 00:13:46.519 8.249 - 
8.296: 98.4970% ( 2) 00:13:46.519 8.296 - 8.344: 98.5045% ( 1) 00:13:46.519 8.391 - 8.439: 98.5121% ( 1) 00:13:46.519 8.439 - 8.486: 98.5196% ( 1) 00:13:46.519 8.486 - 8.533: 98.5272% ( 1) 00:13:46.519 8.533 - 8.581: 98.5423% ( 2) 00:13:46.519 8.581 - 8.628: 98.5650% ( 3) 00:13:46.519 8.770 - 8.818: 98.5801% ( 2) 00:13:46.519 8.818 - 8.865: 98.5876% ( 1) 00:13:46.519 9.055 - 9.102: 98.5952% ( 1) 00:13:46.519 9.102 - 9.150: 98.6027% ( 1) 00:13:46.519 9.292 - 9.339: 98.6103% ( 1) 00:13:46.519 9.529 - 9.576: 98.6178% ( 1) 00:13:46.519 9.719 - 9.766: 98.6254% ( 1) 00:13:46.519 9.766 - 9.813: 98.6329% ( 1) 00:13:46.519 10.003 - 10.050: 98.6405% ( 1) 00:13:46.519 10.145 - 10.193: 98.6480% ( 1) 00:13:46.519 10.287 - 10.335: 98.6556% ( 1) 00:13:46.519 10.335 - 10.382: 98.6707% ( 2) 00:13:46.519 10.619 - 10.667: 98.6782% ( 1) 00:13:46.519 10.714 - 10.761: 98.6934% ( 2) 00:13:46.519 10.809 - 10.856: 98.7085% ( 2) 00:13:46.519 10.951 - 10.999: 98.7160% ( 1) 00:13:46.519 10.999 - 11.046: 98.7236% ( 1) 00:13:46.519 11.093 - 11.141: 98.7311% ( 1) 00:13:46.519 11.236 - 11.283: 98.7387% ( 1) 00:13:46.519 11.473 - 11.520: 98.7462% ( 1) 00:13:46.519 11.852 - 11.899: 98.7538% ( 1) 00:13:46.519 11.947 - 11.994: 98.7613% ( 1) 00:13:46.519 11.994 - 12.041: 98.7689% ( 1) 00:13:46.519 12.041 - 12.089: 98.7764% ( 1) 00:13:46.519 12.231 - 12.326: 98.7915% ( 2) 00:13:46.519 12.326 - 12.421: 98.8142% ( 3) 00:13:46.519 12.610 - 12.705: 98.8218% ( 1) 00:13:46.519 12.800 - 12.895: 98.8293% ( 1) 00:13:46.519 12.895 - 12.990: 98.8369% ( 1) 00:13:46.519 13.274 - 13.369: 98.8520% ( 2) 00:13:46.519 13.464 - 13.559: 98.8595% ( 1) 00:13:46.519 13.559 - 13.653: 98.8746% ( 2) 00:13:46.519 13.653 - 13.748: 98.8822% ( 1) 00:13:46.519 13.748 - 13.843: 98.8897% ( 1) 00:13:46.519 13.843 - 13.938: 98.8973% ( 1) 00:13:46.519 14.317 - 14.412: 98.9124% ( 2) 00:13:46.519 14.507 - 14.601: 98.9199% ( 1) 00:13:46.519 14.696 - 14.791: 98.9275% ( 1) 00:13:46.519 14.791 - 14.886: 98.9350% ( 1) 00:13:46.519 14.981 - 
15.076: 98.9426% ( 1) 00:13:46.519 16.972 - 17.067: 98.9502% ( 1) 00:13:46.519 17.161 - 17.256: 98.9728% ( 3) 00:13:46.519 17.256 - 17.351: 98.9955% ( 3) 00:13:46.519 17.351 - 17.446: 99.0106% ( 2) 00:13:46.519 17.446 - 17.541: 99.0408% ( 4) 00:13:46.519 17.541 - 17.636: 99.0634% ( 3) 00:13:46.519 17.636 - 17.730: 99.1088% ( 6) 00:13:46.519 17.730 - 17.825: 99.1616% ( 7) 00:13:46.519 17.825 - 17.920: 99.2221% ( 8) 00:13:46.519 17.920 - 18.015: 99.2825% ( 8) 00:13:46.519 18.015 - 18.110: 99.3656% ( 11) 00:13:46.519 18.110 - 18.204: 99.3882% ( 3) 00:13:46.519 18.204 - 18.299: 99.4713% ( 11) 00:13:46.519 18.299 - 18.394: 99.5317% ( 8) 00:13:46.519 18.394 - 18.489: 99.5846% ( 7) 00:13:46.519 18.489 - 18.584: 99.6375% ( 7) 00:13:46.519 18.584 - 18.679: 99.6601% ( 3) 00:13:46.519 18.679 - 18.773: 99.7130% ( 7) 00:13:46.519 18.773 - 18.868: 99.7205% ( 1) 00:13:46.519 18.868 - 18.963: 99.7659% ( 6) 00:13:46.519 19.058 - 19.153: 99.7734% ( 1) 00:13:46.519 19.247 - 19.342: 99.7810% ( 1) 00:13:46.519 19.342 - 19.437: 99.7885% ( 1) 00:13:46.519 19.532 - 19.627: 99.7961% ( 1) 00:13:46.519 19.721 - 19.816: 99.8036% ( 1) 00:13:46.519 19.911 - 20.006: 99.8112% ( 1) 00:13:46.519 20.480 - 20.575: 99.8263% ( 2) 00:13:46.519 20.764 - 20.859: 99.8338% ( 1) 00:13:46.519 20.954 - 21.049: 99.8414% ( 1) 00:13:46.519 21.049 - 21.144: 99.8489% ( 1) 00:13:46.519 21.902 - 21.997: 99.8565% ( 1) 00:13:46.519 22.376 - 22.471: 99.8640% ( 1) 00:13:46.519 26.359 - 26.548: 99.8716% ( 1) 00:13:46.519 26.548 - 26.738: 99.8792% ( 1) 00:13:46.519 28.824 - 29.013: 99.8867% ( 1) 00:13:46.519 3980.705 - 4004.978: 99.9773% ( 12) 00:13:46.519 4004.978 - 4029.250: 100.0000% ( 3) 00:13:46.519 00:13:46.519 Complete histogram 00:13:46.519 ================== 00:13:46.519 Range in us Cumulative Count 00:13:46.519 2.050 - 2.062: 0.0076% ( 1) 00:13:46.519 2.062 - 2.074: 14.2976% ( 1892) 00:13:46.519 2.074 - 2.086: 44.3731% ( 3982) 00:13:46.519 2.086 - 2.098: 46.4879% ( 280) 00:13:46.519 2.098 - 2.110: 52.3640% ( 778) 
00:13:46.519 2.110 - 2.121: 57.0317% ( 618) 00:13:46.519 2.121 - 2.133: 58.7538% ( 228) 00:13:46.519 2.133 - 2.145: 69.5921% ( 1435) 00:13:46.519 2.145 - 2.157: 77.3792% ( 1031) 00:13:46.519 2.157 - 2.169: 78.2704% ( 118) 00:13:46.519 2.169 - 2.181: 80.9215% ( 351) 00:13:46.519 2.181 - 2.193: 82.9834% ( 273) 00:13:46.519 2.193 - 2.204: 83.5272% ( 72) 00:13:46.519 2.204 - 2.216: 86.8429% ( 439) 00:13:46.519 2.216 - 2.228: 90.1360% ( 436) 00:13:46.519 2.228 - 2.240: 91.9486% ( 240) 00:13:46.519 2.240 - 2.252: 93.0514% ( 146) 00:13:46.519 2.252 - 2.264: 93.7538% ( 93) 00:13:46.519 2.264 - 2.276: 94.0030% ( 33) 00:13:46.519 2.276 - 2.287: 94.2749% ( 36) 00:13:46.519 2.287 - 2.299: 94.7356% ( 61) 00:13:46.519 2.299 - 2.311: 95.2568% ( 69) 00:13:46.519 2.311 - 2.323: 95.5211% ( 35) 00:13:46.519 2.323 - 2.335: 95.5665% ( 6) 00:13:46.519 2.335 - 2.347: 95.6571% ( 12) 00:13:46.519 2.347 - 2.359: 95.9063% ( 33) 00:13:46.519 2.359 - 2.370: 96.2311% ( 43) 00:13:46.519 2.370 - 2.382: 96.6465% ( 55) 00:13:46.519 2.382 - 2.394: 97.2054% ( 74) 00:13:46.519 2.394 - 2.406: 97.4471% ( 32) 00:13:46.519 2.406 - 2.418: 97.5831% ( 18) 00:13:46.519 2.418 - 2.430: 97.7341% ( 20) 00:13:46.519 2.430 - 2.441: 97.9230% ( 25) 00:13:46.519 2.441 - 2.453: 98.0665% ( 19) 00:13:46.519 2.453 - 2.465: 98.2175% ( 20) 00:13:46.519 2.465 - 2.477: 98.3006% ( 11) 00:13:46.519 2.477 - 2.489: 98.3837% ( 11) 00:13:46.519 2.489 - 2.501: 98.4139% ( 4) 00:13:46.519 2.501 - 2.513: 98.4290% ( 2) 00:13:46.519 2.513 - 2.524: 98.4592% ( 4) 00:13:46.519 2.524 - 2.536: 98.4668% ( 1) 00:13:46.519 2.536 - 2.548: 98.4819% ( 2) 00:13:46.519 2.548 - 2.560: 98.4970% ( 2) 00:13:46.519 2.560 - 2.572: 98.5196% ( 3) 00:13:46.519 2.572 - 2.584: 98.5272% ( 1) 00:13:46.519 2.596 - 2.607: 98.5347% ( 1) 00:13:46.519 2.619 - 2.631: 98.5423% ( 1) 00:13:46.519 2.631 - 2.643: 98.5498% ( 1) 00:13:46.519 2.655 - 2.667: 98.5650% ( 2) 00:13:46.519 2.667 - 2.679: 98.5725% ( 1) 00:13:46.520 2.679 - 2.690: 98.5801% ( 1) 00:13:46.520 2.797 - 
2.809: 98.5876% ( 1) 00:13:46.520 2.809 - 2.821: 98.5952% ( 1) 00:13:46.520 [2024-11-20 09:04:23.209150] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:46.520 2.999 - 3.010: 98.6027% ( 1) 00:13:46.520 3.271 - 3.295: 98.6103% ( 1) 00:13:46.520 3.366 - 3.390: 98.6178% ( 1) 00:13:46.520 3.390 - 3.413: 98.6254% ( 1) 00:13:46.520 3.437 - 3.461: 98.6329% ( 1) 00:13:46.520 3.461 - 3.484: 98.6480% ( 2) 00:13:46.520 3.532 - 3.556: 98.6707% ( 3) 00:13:46.520 3.556 - 3.579: 98.6858% ( 2) 00:13:46.520 3.627 - 3.650: 98.7009% ( 2) 00:13:46.520 3.650 - 3.674: 98.7160% ( 2) 00:13:46.520 3.745 - 3.769: 98.7236% ( 1) 00:13:46.520 3.816 - 3.840: 98.7311% ( 1) 00:13:46.520 3.840 - 3.864: 98.7387% ( 1) 00:13:46.520 3.959 - 3.982: 98.7462% ( 1) 00:13:46.520 4.030 - 4.053: 98.7538% ( 1) 00:13:46.520 4.148 - 4.172: 98.7689% ( 2) 00:13:46.520 5.049 - 5.073: 98.7764% ( 1) 00:13:46.520 5.167 - 5.191: 98.7840% ( 1) 00:13:46.520 5.286 - 5.310: 98.7915% ( 1) 00:13:46.520 5.428 - 5.452: 98.7991% ( 1) 00:13:46.520 5.618 - 5.641: 98.8066% ( 1) 00:13:46.520 5.997 - 6.021: 98.8142% ( 1) 00:13:46.520 6.163 - 6.210: 98.8293% ( 2) 00:13:46.520 6.353 - 6.400: 98.8369% ( 1) 00:13:46.520 6.447 - 6.495: 98.8444% ( 1) 00:13:46.520 6.495 - 6.542: 98.8520% ( 1) 00:13:46.520 6.542 - 6.590: 98.8595% ( 1) 00:13:46.520 6.637 - 6.684: 98.8671% ( 1) 00:13:46.520 6.779 - 6.827: 98.8746% ( 1) 00:13:46.520 6.874 - 6.921: 98.8822% ( 1) 00:13:46.520 7.159 - 7.206: 98.8897% ( 1) 00:13:46.520 8.249 - 8.296: 98.8973% ( 1) 00:13:46.520 15.550 - 15.644: 98.9124% ( 2) 00:13:46.520 15.644 - 15.739: 98.9199% ( 1) 00:13:46.520 15.834 - 15.929: 98.9426% ( 3) 00:13:46.520 15.929 - 16.024: 98.9502% ( 1) 00:13:46.520 16.024 - 16.119: 98.9879% ( 5) 00:13:46.520 16.119 - 16.213: 99.0408% ( 7) 00:13:46.520 16.213 - 16.308: 99.0785% ( 5) 00:13:46.520 16.308 - 16.403: 99.1163% ( 5) 00:13:46.520 16.403 - 16.498: 99.1465% ( 4) 00:13:46.520 16.498 - 16.593: 99.1692% ( 3) 
00:13:46.520 16.593 - 16.687: 99.1918% ( 3) 00:13:46.520 16.687 - 16.782: 99.2296% ( 5) 00:13:46.520 16.782 - 16.877: 99.2598% ( 4) 00:13:46.520 16.877 - 16.972: 99.2825% ( 3) 00:13:46.520 16.972 - 17.067: 99.2900% ( 1) 00:13:46.520 17.067 - 17.161: 99.2976% ( 1) 00:13:46.520 17.161 - 17.256: 99.3051% ( 1) 00:13:46.520 17.256 - 17.351: 99.3202% ( 2) 00:13:46.520 17.446 - 17.541: 99.3505% ( 4) 00:13:46.520 17.730 - 17.825: 99.3580% ( 1) 00:13:46.520 17.920 - 18.015: 99.3731% ( 2) 00:13:46.520 18.204 - 18.299: 99.3807% ( 1) 00:13:46.520 18.394 - 18.489: 99.3882% ( 1) 00:13:46.520 18.679 - 18.773: 99.3958% ( 1) 00:13:46.520 18.773 - 18.868: 99.4033% ( 1) 00:13:46.520 22.850 - 22.945: 99.4109% ( 1) 00:13:46.520 3980.705 - 4004.978: 99.8112% ( 53) 00:13:46.520 4004.978 - 4029.250: 100.0000% ( 25) 00:13:46.520 00:13:46.520 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:46.520 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:46.520 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:46.520 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:46.520 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:46.777 [ 00:13:46.777 { 00:13:46.777 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:46.777 "subtype": "Discovery", 00:13:46.777 "listen_addresses": [], 00:13:46.777 "allow_any_host": true, 00:13:46.777 "hosts": [] 00:13:46.777 }, 00:13:46.777 { 00:13:46.777 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:46.777 "subtype": "NVMe", 00:13:46.777 "listen_addresses": [ 00:13:46.777 { 
00:13:46.777 "trtype": "VFIOUSER", 00:13:46.777 "adrfam": "IPv4", 00:13:46.777 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:46.777 "trsvcid": "0" 00:13:46.777 } 00:13:46.777 ], 00:13:46.777 "allow_any_host": true, 00:13:46.777 "hosts": [], 00:13:46.777 "serial_number": "SPDK1", 00:13:46.777 "model_number": "SPDK bdev Controller", 00:13:46.777 "max_namespaces": 32, 00:13:46.777 "min_cntlid": 1, 00:13:46.777 "max_cntlid": 65519, 00:13:46.777 "namespaces": [ 00:13:46.777 { 00:13:46.777 "nsid": 1, 00:13:46.777 "bdev_name": "Malloc1", 00:13:46.777 "name": "Malloc1", 00:13:46.777 "nguid": "7E635452542C41E2A6EBB8F66401E07B", 00:13:46.777 "uuid": "7e635452-542c-41e2-a6eb-b8f66401e07b" 00:13:46.777 }, 00:13:46.777 { 00:13:46.777 "nsid": 2, 00:13:46.777 "bdev_name": "Malloc3", 00:13:46.777 "name": "Malloc3", 00:13:46.777 "nguid": "F36BC2465FAA4D5381BB03B9E0E996DB", 00:13:46.777 "uuid": "f36bc246-5faa-4d53-81bb-03b9e0e996db" 00:13:46.777 } 00:13:46.777 ] 00:13:46.777 }, 00:13:46.777 { 00:13:46.777 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:46.777 "subtype": "NVMe", 00:13:46.777 "listen_addresses": [ 00:13:46.777 { 00:13:46.777 "trtype": "VFIOUSER", 00:13:46.777 "adrfam": "IPv4", 00:13:46.777 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:46.777 "trsvcid": "0" 00:13:46.777 } 00:13:46.777 ], 00:13:46.777 "allow_any_host": true, 00:13:46.777 "hosts": [], 00:13:46.777 "serial_number": "SPDK2", 00:13:46.777 "model_number": "SPDK bdev Controller", 00:13:46.777 "max_namespaces": 32, 00:13:46.777 "min_cntlid": 1, 00:13:46.777 "max_cntlid": 65519, 00:13:46.777 "namespaces": [ 00:13:46.777 { 00:13:46.777 "nsid": 1, 00:13:46.777 "bdev_name": "Malloc2", 00:13:46.777 "name": "Malloc2", 00:13:46.777 "nguid": "794CFF3DC8D244F5A28A39D01D3CB54B", 00:13:46.777 "uuid": "794cff3d-c8d2-44f5-a28a-39d01d3cb54b" 00:13:46.777 } 00:13:46.777 ] 00:13:46.777 } 00:13:46.777 ] 00:13:46.777 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # 
AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:46.777 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3307527 00:13:46.777 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:46.777 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:46.777 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:13:46.777 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:46.777 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:46.777 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:13:46.777 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:46.777 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:46.777 [2024-11-20 09:04:23.752761] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:47.035 Malloc4 00:13:47.035 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:47.292 [2024-11-20 09:04:24.146900] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:47.292 09:04:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:47.292 Asynchronous Event Request test 00:13:47.292 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:47.292 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:47.292 Registering asynchronous event callbacks... 00:13:47.292 Starting namespace attribute notice tests for all controllers... 00:13:47.292 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:47.292 aer_cb - Changed Namespace 00:13:47.292 Cleaning up... 00:13:47.549 [ 00:13:47.550 { 00:13:47.550 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:47.550 "subtype": "Discovery", 00:13:47.550 "listen_addresses": [], 00:13:47.550 "allow_any_host": true, 00:13:47.550 "hosts": [] 00:13:47.550 }, 00:13:47.550 { 00:13:47.550 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:47.550 "subtype": "NVMe", 00:13:47.550 "listen_addresses": [ 00:13:47.550 { 00:13:47.550 "trtype": "VFIOUSER", 00:13:47.550 "adrfam": "IPv4", 00:13:47.550 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:47.550 "trsvcid": "0" 00:13:47.550 } 00:13:47.550 ], 00:13:47.550 "allow_any_host": true, 00:13:47.550 "hosts": [], 00:13:47.550 "serial_number": "SPDK1", 00:13:47.550 "model_number": "SPDK bdev Controller", 00:13:47.550 "max_namespaces": 32, 00:13:47.550 "min_cntlid": 1, 00:13:47.550 "max_cntlid": 65519, 00:13:47.550 "namespaces": [ 00:13:47.550 { 00:13:47.550 "nsid": 1, 00:13:47.550 "bdev_name": "Malloc1", 00:13:47.550 "name": "Malloc1", 00:13:47.550 "nguid": "7E635452542C41E2A6EBB8F66401E07B", 00:13:47.550 "uuid": "7e635452-542c-41e2-a6eb-b8f66401e07b" 00:13:47.550 }, 00:13:47.550 { 00:13:47.550 "nsid": 2, 00:13:47.550 "bdev_name": "Malloc3", 00:13:47.550 "name": "Malloc3", 00:13:47.550 "nguid": "F36BC2465FAA4D5381BB03B9E0E996DB", 00:13:47.550 "uuid": "f36bc246-5faa-4d53-81bb-03b9e0e996db" 
00:13:47.550 } 00:13:47.550 ] 00:13:47.550 }, 00:13:47.550 { 00:13:47.550 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:47.550 "subtype": "NVMe", 00:13:47.550 "listen_addresses": [ 00:13:47.550 { 00:13:47.550 "trtype": "VFIOUSER", 00:13:47.550 "adrfam": "IPv4", 00:13:47.550 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:47.550 "trsvcid": "0" 00:13:47.550 } 00:13:47.550 ], 00:13:47.550 "allow_any_host": true, 00:13:47.550 "hosts": [], 00:13:47.550 "serial_number": "SPDK2", 00:13:47.550 "model_number": "SPDK bdev Controller", 00:13:47.550 "max_namespaces": 32, 00:13:47.550 "min_cntlid": 1, 00:13:47.550 "max_cntlid": 65519, 00:13:47.550 "namespaces": [ 00:13:47.550 { 00:13:47.550 "nsid": 1, 00:13:47.550 "bdev_name": "Malloc2", 00:13:47.550 "name": "Malloc2", 00:13:47.550 "nguid": "794CFF3DC8D244F5A28A39D01D3CB54B", 00:13:47.550 "uuid": "794cff3d-c8d2-44f5-a28a-39d01d3cb54b" 00:13:47.550 }, 00:13:47.550 { 00:13:47.550 "nsid": 2, 00:13:47.550 "bdev_name": "Malloc4", 00:13:47.550 "name": "Malloc4", 00:13:47.550 "nguid": "273FBC3E02164B04B47C2BE0F8F30881", 00:13:47.550 "uuid": "273fbc3e-0216-4b04-b47c-2be0f8f30881" 00:13:47.550 } 00:13:47.550 ] 00:13:47.550 } 00:13:47.550 ] 00:13:47.550 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3307527 00:13:47.550 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:47.550 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3301926 00:13:47.550 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 3301926 ']' 00:13:47.550 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 3301926 00:13:47.550 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:13:47.550 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # 
'[' Linux = Linux ']' 00:13:47.550 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3301926 00:13:47.550 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:47.550 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:47.550 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3301926' 00:13:47.550 killing process with pid 3301926 00:13:47.550 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 3301926 00:13:47.550 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 3301926 00:13:47.810 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:47.810 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:47.810 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:47.810 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:47.810 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:47.810 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3307671 00:13:47.810 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:47.810 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3307671' 00:13:47.810 Process pid: 3307671 00:13:47.810 
09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:47.810 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3307671 00:13:47.810 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 3307671 ']' 00:13:47.810 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.810 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:47.810 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.810 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:47.810 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:48.069 [2024-11-20 09:04:24.855729] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:48.069 [2024-11-20 09:04:24.856820] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:13:48.069 [2024-11-20 09:04:24.856894] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.069 [2024-11-20 09:04:24.921879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:48.069 [2024-11-20 09:04:24.975080] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:48.069 [2024-11-20 09:04:24.975134] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:48.069 [2024-11-20 09:04:24.975154] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:48.069 [2024-11-20 09:04:24.975172] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:48.069 [2024-11-20 09:04:24.975186] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:48.069 [2024-11-20 09:04:24.976687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.069 [2024-11-20 09:04:24.976744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:48.069 [2024-11-20 09:04:24.976809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:48.069 [2024-11-20 09:04:24.976813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.069 [2024-11-20 09:04:25.062844] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:48.069 [2024-11-20 09:04:25.063042] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:48.069 [2024-11-20 09:04:25.063357] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:48.070 [2024-11-20 09:04:25.063995] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:48.070 [2024-11-20 09:04:25.064217] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:13:48.070 09:04:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:48.070 09:04:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:13:48.070 09:04:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:49.444 09:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:49.444 09:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:49.444 09:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:49.444 09:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:49.444 09:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:49.444 09:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:50.009 Malloc1 00:13:50.009 09:04:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:50.265 09:04:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:50.522 09:04:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:13:50.779 09:04:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:50.779 09:04:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:50.779 09:04:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:51.344 Malloc2 00:13:51.344 09:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:51.601 09:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:51.858 09:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:52.116 09:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:52.116 09:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3307671 00:13:52.116 09:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 3307671 ']' 00:13:52.116 09:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 3307671 00:13:52.116 09:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:13:52.116 09:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:52.116 09:04:28 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3307671 00:13:52.116 09:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:52.116 09:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:52.116 09:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3307671' 00:13:52.116 killing process with pid 3307671 00:13:52.116 09:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 3307671 00:13:52.116 09:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 3307671 00:13:52.372 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:52.372 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:52.372 00:13:52.372 real 0m53.759s 00:13:52.372 user 3m27.332s 00:13:52.372 sys 0m4.024s 00:13:52.372 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:52.372 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:52.372 ************************************ 00:13:52.372 END TEST nvmf_vfio_user 00:13:52.372 ************************************ 00:13:52.372 09:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:52.372 09:04:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:52.372 09:04:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:52.372 09:04:29 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:13:52.372 ************************************ 00:13:52.372 START TEST nvmf_vfio_user_nvme_compliance 00:13:52.372 ************************************ 00:13:52.372 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:52.372 * Looking for test storage... 00:13:52.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:52.372 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:52.372 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:13:52.372 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:13:52.630 09:04:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:52.630 09:04:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:52.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.630 --rc genhtml_branch_coverage=1 00:13:52.630 --rc genhtml_function_coverage=1 00:13:52.630 --rc genhtml_legend=1 00:13:52.630 --rc geninfo_all_blocks=1 00:13:52.630 --rc geninfo_unexecuted_blocks=1 00:13:52.630 00:13:52.630 ' 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:52.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.630 --rc genhtml_branch_coverage=1 00:13:52.630 --rc genhtml_function_coverage=1 00:13:52.630 --rc genhtml_legend=1 00:13:52.630 --rc geninfo_all_blocks=1 00:13:52.630 --rc geninfo_unexecuted_blocks=1 00:13:52.630 00:13:52.630 ' 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:52.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.630 --rc genhtml_branch_coverage=1 00:13:52.630 --rc genhtml_function_coverage=1 00:13:52.630 --rc 
genhtml_legend=1 00:13:52.630 --rc geninfo_all_blocks=1 00:13:52.630 --rc geninfo_unexecuted_blocks=1 00:13:52.630 00:13:52.630 ' 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:52.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.630 --rc genhtml_branch_coverage=1 00:13:52.630 --rc genhtml_function_coverage=1 00:13:52.630 --rc genhtml_legend=1 00:13:52.630 --rc geninfo_all_blocks=1 00:13:52.630 --rc geninfo_unexecuted_blocks=1 00:13:52.630 00:13:52.630 ' 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:52.630 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:52.631 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:52.631 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:13:52.631 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:52.631 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:52.631 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:52.631 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.631 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.631 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.631 09:04:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:52.631 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.631 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:13:52.631 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:52.631 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:52.631 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:52.631 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:52.631 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:52.631 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:52.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:52.631 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:52.631 09:04:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:52.631 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:52.631 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:52.631 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:52.631 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:52.631 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:52.631 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:52.631 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3308285 00:13:52.631 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:52.631 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3308285' 00:13:52.631 Process pid: 3308285 00:13:52.631 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:52.631 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3308285 00:13:52.631 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # '[' -z 3308285 ']' 00:13:52.631 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.631 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:52.631 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.631 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:52.631 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:52.631 [2024-11-20 09:04:29.533479] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:13:52.631 [2024-11-20 09:04:29.533574] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.631 [2024-11-20 09:04:29.604112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:52.889 [2024-11-20 09:04:29.666401] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:52.889 [2024-11-20 09:04:29.666445] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:52.889 [2024-11-20 09:04:29.666473] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:52.889 [2024-11-20 09:04:29.666484] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:52.889 [2024-11-20 09:04:29.666493] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:52.889 [2024-11-20 09:04:29.667850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.889 [2024-11-20 09:04:29.667933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:52.889 [2024-11-20 09:04:29.667938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.889 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:52.889 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@866 -- # return 0 00:13:52.889 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:53.821 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:53.821 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:53.821 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:53.821 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.821 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:53.821 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.821 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:53.821 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:53.821 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.821 09:04:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:53.821 malloc0 00:13:53.821 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.821 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:53.821 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.821 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:53.821 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.821 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:53.821 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.821 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:54.078 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.078 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:54.078 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.078 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:54.078 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:54.078 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:54.078 00:13:54.078 00:13:54.078 CUnit - A unit testing framework for C - Version 2.1-3 00:13:54.078 http://cunit.sourceforge.net/ 00:13:54.078 00:13:54.078 00:13:54.078 Suite: nvme_compliance 00:13:54.078 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 09:04:31.035518] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:54.078 [2024-11-20 09:04:31.037007] vfio_user.c: 800:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:54.078 [2024-11-20 09:04:31.037031] vfio_user.c:5503:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:54.078 [2024-11-20 09:04:31.037058] vfio_user.c:5596:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:54.078 [2024-11-20 09:04:31.038539] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:54.078 passed 00:13:54.336 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 09:04:31.122108] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:54.336 [2024-11-20 09:04:31.125127] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:54.336 passed 00:13:54.336 Test: admin_identify_ns ...[2024-11-20 09:04:31.211788] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:54.336 [2024-11-20 09:04:31.275326] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:54.336 [2024-11-20 09:04:31.283323] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:54.336 [2024-11-20 09:04:31.304442] vfio_user.c:2794:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:13:54.336 passed 00:13:54.593 Test: admin_get_features_mandatory_features ...[2024-11-20 09:04:31.388308] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:54.593 [2024-11-20 09:04:31.391329] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:54.593 passed 00:13:54.593 Test: admin_get_features_optional_features ...[2024-11-20 09:04:31.475871] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:54.593 [2024-11-20 09:04:31.478890] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:54.593 passed 00:13:54.593 Test: admin_set_features_number_of_queues ...[2024-11-20 09:04:31.560116] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:54.850 [2024-11-20 09:04:31.664423] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:54.850 passed 00:13:54.851 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 09:04:31.748004] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:54.851 [2024-11-20 09:04:31.754040] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:54.851 passed 00:13:54.851 Test: admin_get_log_page_with_lpo ...[2024-11-20 09:04:31.837213] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:55.109 [2024-11-20 09:04:31.905322] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:55.109 [2024-11-20 09:04:31.918406] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:55.109 passed 00:13:55.109 Test: fabric_property_get ...[2024-11-20 09:04:31.998005] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:55.109 [2024-11-20 09:04:31.999297] vfio_user.c:5596:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:55.109 [2024-11-20 09:04:32.001027] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:55.109 passed 00:13:55.109 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 09:04:32.086614] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:55.109 [2024-11-20 09:04:32.087894] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:55.109 [2024-11-20 09:04:32.089623] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:55.109 passed 00:13:55.366 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 09:04:32.173817] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:55.366 [2024-11-20 09:04:32.257329] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:55.366 [2024-11-20 09:04:32.273313] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:55.366 [2024-11-20 09:04:32.278428] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:55.366 passed 00:13:55.366 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 09:04:32.362060] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:55.366 [2024-11-20 09:04:32.363396] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:55.366 [2024-11-20 09:04:32.365082] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:55.623 passed 00:13:55.624 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 09:04:32.449858] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:55.624 [2024-11-20 09:04:32.525314] vfio_user.c:2315:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:55.624 [2024-11-20 
09:04:32.549317] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:55.624 [2024-11-20 09:04:32.554437] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:55.624 passed 00:13:55.624 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 09:04:32.635591] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:55.624 [2024-11-20 09:04:32.636914] vfio_user.c:2154:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:55.624 [2024-11-20 09:04:32.636952] vfio_user.c:2148:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:55.624 [2024-11-20 09:04:32.638621] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:55.881 passed 00:13:55.881 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 09:04:32.724904] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:55.881 [2024-11-20 09:04:32.816330] vfio_user.c:2236:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:13:55.881 [2024-11-20 09:04:32.824315] vfio_user.c:2236:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:55.881 [2024-11-20 09:04:32.832315] vfio_user.c:2034:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:55.881 [2024-11-20 09:04:32.840315] vfio_user.c:2034:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:55.881 [2024-11-20 09:04:32.869428] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:55.881 passed 00:13:56.139 Test: admin_create_io_sq_verify_pc ...[2024-11-20 09:04:32.953118] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:56.139 [2024-11-20 09:04:32.969328] vfio_user.c:2047:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:56.139 [2024-11-20 09:04:32.986617] 
vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:56.139 passed 00:13:56.139 Test: admin_create_io_qp_max_qps ...[2024-11-20 09:04:33.070195] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:57.512 [2024-11-20 09:04:34.163321] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:13:57.769 [2024-11-20 09:04:34.550957] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:57.769 passed 00:13:57.769 Test: admin_create_io_sq_shared_cq ...[2024-11-20 09:04:34.635859] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:57.769 [2024-11-20 09:04:34.767325] vfio_user.c:2315:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:58.027 [2024-11-20 09:04:34.804414] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:58.028 passed 00:13:58.028 00:13:58.028 Run Summary: Type Total Ran Passed Failed Inactive 00:13:58.028 suites 1 1 n/a 0 0 00:13:58.028 tests 18 18 18 0 0 00:13:58.028 asserts 360 360 360 0 n/a 00:13:58.028 00:13:58.028 Elapsed time = 1.562 seconds 00:13:58.028 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3308285 00:13:58.028 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' -z 3308285 ']' 00:13:58.028 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # kill -0 3308285 00:13:58.028 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # uname 00:13:58.028 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:58.028 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3308285 00:13:58.028 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:58.028 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:58.028 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3308285' 00:13:58.028 killing process with pid 3308285 00:13:58.028 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # kill 3308285 00:13:58.028 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@976 -- # wait 3308285 00:13:58.288 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:58.288 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:58.288 00:13:58.288 real 0m5.827s 00:13:58.288 user 0m16.308s 00:13:58.288 sys 0m0.548s 00:13:58.288 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:58.288 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:58.288 ************************************ 00:13:58.288 END TEST nvmf_vfio_user_nvme_compliance 00:13:58.288 ************************************ 00:13:58.288 09:04:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:58.288 09:04:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:58.288 09:04:35 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:13:58.288 09:04:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:58.288 ************************************ 00:13:58.288 START TEST nvmf_vfio_user_fuzz 00:13:58.288 ************************************ 00:13:58.288 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:58.288 * Looking for test storage... 00:13:58.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:58.288 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:58.288 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:13:58.288 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:58.547 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:58.547 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:58.547 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:58.547 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:58.547 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:13:58.547 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:13:58.547 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:13:58.547 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:13:58.547 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:13:58.547 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:13:58.547 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:13:58.547 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:58.547 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:13:58.547 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:13:58.547 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:58.547 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:58.547 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:13:58.547 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:13:58.547 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:58.547 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:13:58.547 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:13:58.547 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:13:58.547 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:13:58.547 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:58.547 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:13:58.547 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:13:58.547 09:04:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:58.547 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:58.547 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:13:58.547 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:58.547 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:58.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.547 --rc genhtml_branch_coverage=1 00:13:58.547 --rc genhtml_function_coverage=1 00:13:58.547 --rc genhtml_legend=1 00:13:58.548 --rc geninfo_all_blocks=1 00:13:58.548 --rc geninfo_unexecuted_blocks=1 00:13:58.548 00:13:58.548 ' 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:58.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.548 --rc genhtml_branch_coverage=1 00:13:58.548 --rc genhtml_function_coverage=1 00:13:58.548 --rc genhtml_legend=1 00:13:58.548 --rc geninfo_all_blocks=1 00:13:58.548 --rc geninfo_unexecuted_blocks=1 00:13:58.548 00:13:58.548 ' 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:58.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.548 --rc genhtml_branch_coverage=1 00:13:58.548 --rc genhtml_function_coverage=1 00:13:58.548 --rc genhtml_legend=1 00:13:58.548 --rc geninfo_all_blocks=1 00:13:58.548 --rc geninfo_unexecuted_blocks=1 00:13:58.548 00:13:58.548 ' 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:58.548 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:13:58.548 --rc genhtml_branch_coverage=1 00:13:58.548 --rc genhtml_function_coverage=1 00:13:58.548 --rc genhtml_legend=1 00:13:58.548 --rc geninfo_all_blocks=1 00:13:58.548 --rc geninfo_unexecuted_blocks=1 00:13:58.548 00:13:58.548 ' 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.548 09:04:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:58.548 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3309129 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3309129' 00:13:58.548 Process pid: 3309129 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3309129 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # '[' -z 3309129 ']' 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:58.548 09:04:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:58.548 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:58.807 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:58.807 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@866 -- # return 0 00:13:58.807 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:59.741 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:59.741 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.741 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:59.741 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.741 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:59.741 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:59.741 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.741 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:59.741 malloc0 00:13:59.742 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.742 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:59.742 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.742 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:59.742 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.742 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:59.742 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.742 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:59.742 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.742 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:59.742 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.742 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:59.742 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.742 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:59.742 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:31.804 Fuzzing completed. Shutting down the fuzz application 00:14:31.804 00:14:31.804 Dumping successful admin opcodes: 00:14:31.804 8, 9, 10, 24, 00:14:31.804 Dumping successful io opcodes: 00:14:31.804 0, 00:14:31.804 NS: 0x20000081ef00 I/O qp, Total commands completed: 643676, total successful commands: 2500, random_seed: 1395663168 00:14:31.804 NS: 0x20000081ef00 admin qp, Total commands completed: 136143, total successful commands: 1107, random_seed: 618384448 00:14:31.804 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:31.804 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.804 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:31.804 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.804 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3309129 00:14:31.804 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' -z 3309129 ']' 00:14:31.804 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # kill -0 3309129 00:14:31.804 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # uname 00:14:31.804 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:31.804 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3309129 00:14:31.804 09:05:07 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:31.804 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:31.804 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3309129' 00:14:31.804 killing process with pid 3309129 00:14:31.804 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # kill 3309129 00:14:31.804 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@976 -- # wait 3309129 00:14:31.804 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:31.804 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:31.804 00:14:31.804 real 0m32.305s 00:14:31.804 user 0m34.941s 00:14:31.804 sys 0m26.530s 00:14:31.804 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:31.804 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:31.804 ************************************ 00:14:31.804 END TEST nvmf_vfio_user_fuzz 00:14:31.804 ************************************ 00:14:31.804 09:05:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:31.804 09:05:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:31.804 09:05:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 
00:14:31.804 09:05:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:31.804 ************************************ 00:14:31.804 START TEST nvmf_auth_target 00:14:31.804 ************************************ 00:14:31.804 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:31.804 * Looking for test storage... 00:14:31.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:31.804 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:31.804 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:14:31.804 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:31.804 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:31.804 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:31.804 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:31.804 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:31.804 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:31.804 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:31.804 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:31.804 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:31.804 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:31.804 09:05:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:31.805 09:05:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:31.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.805 --rc genhtml_branch_coverage=1 00:14:31.805 --rc genhtml_function_coverage=1 00:14:31.805 --rc genhtml_legend=1 00:14:31.805 --rc geninfo_all_blocks=1 00:14:31.805 --rc geninfo_unexecuted_blocks=1 00:14:31.805 00:14:31.805 ' 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:31.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.805 --rc genhtml_branch_coverage=1 00:14:31.805 --rc genhtml_function_coverage=1 00:14:31.805 --rc genhtml_legend=1 00:14:31.805 --rc geninfo_all_blocks=1 00:14:31.805 --rc geninfo_unexecuted_blocks=1 00:14:31.805 00:14:31.805 ' 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:31.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.805 --rc genhtml_branch_coverage=1 00:14:31.805 --rc genhtml_function_coverage=1 00:14:31.805 --rc genhtml_legend=1 00:14:31.805 --rc geninfo_all_blocks=1 00:14:31.805 --rc geninfo_unexecuted_blocks=1 00:14:31.805 00:14:31.805 ' 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:31.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.805 --rc genhtml_branch_coverage=1 00:14:31.805 --rc genhtml_function_coverage=1 00:14:31.805 --rc genhtml_legend=1 00:14:31.805 
--rc geninfo_all_blocks=1 00:14:31.805 --rc geninfo_unexecuted_blocks=1 00:14:31.805 00:14:31.805 ' 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.805 
09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:31.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:31.805 09:05:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:31.805 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:31.806 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.806 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:31.806 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.806 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:31.806 09:05:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:31.806 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:31.806 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:14:33.188 09:05:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:33.188 09:05:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:33.188 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:33.188 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:33.188 
09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:33.188 Found net devices under 0000:09:00.0: cvl_0_0 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:33.188 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:33.189 
09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:33.189 Found net devices under 0000:09:00.1: cvl_0_1 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:33.189 09:05:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:33.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:33.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:14:33.189 00:14:33.189 --- 10.0.0.2 ping statistics --- 00:14:33.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.189 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:33.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:33.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:14:33.189 00:14:33.189 --- 10.0.0.1 ping statistics --- 00:14:33.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.189 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
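The `nvmf_tcp_init` sequence traced above (common.sh lines ~250-291) amounts to moving the target-side NIC into its own network namespace, assigning the 10.0.0.x addresses, opening the NVMe/TCP port, and ping-testing both directions. The following is a hedged sketch of those steps, not SPDK's actual helper; the interface and namespace names (`cvl_0_0`, `cvl_0_1`, `cvl_0_0_ns_spdk`) are taken from the log, and the script needs root plus real NICs, so treat it as illustration only.

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init wiring seen in the trace (assumed
# simplification; names cvl_0_0 / cvl_0_1 come from the log above).
set -euo pipefail

NS=cvl_0_0_ns_spdk            # namespace that will own the target NIC
TGT_IF=cvl_0_0                # target-side interface (gets 10.0.0.2)
INI_IF=cvl_0_1                # initiator-side interface (gets 10.0.0.1)

# start from clean interfaces
ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

# isolate the target NIC in its own namespace
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

# address each side of the link
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

# bring links up (including loopback inside the namespace)
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# allow NVMe/TCP traffic in on the initiator side (discovery port 4420)
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# sanity checks, mirroring the two pings in the trace
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

With this wiring in place, the target (`nvmf_tgt`) is later launched via `ip netns exec "$NS" …` so that it listens on 10.0.0.2 while the host-side initiator connects from 10.0.0.1.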
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3314474 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3314474 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3314474 ']' 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:33.189 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.447 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:33.447 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:14:33.447 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:33.447 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:33.447 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.447 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:33.447 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3314499 00:14:33.447 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:33.447 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:33.447 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:14:33.447 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:33.447 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:33.447 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:33.447 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:14:33.447 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:33.447 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:33.447 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=649b3f8b1e0eccfa7679ed9487fd8adfe2977ed2e583156f 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.elz 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 649b3f8b1e0eccfa7679ed9487fd8adfe2977ed2e583156f 0 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 649b3f8b1e0eccfa7679ed9487fd8adfe2977ed2e583156f 0 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=649b3f8b1e0eccfa7679ed9487fd8adfe2977ed2e583156f 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.elz 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.elz 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.elz 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d27c979227528039315c74b31718e36ddc6536af82ca934c866c1a5995e2e8f6 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.pDa 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d27c979227528039315c74b31718e36ddc6536af82ca934c866c1a5995e2e8f6 3 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d27c979227528039315c74b31718e36ddc6536af82ca934c866c1a5995e2e8f6 3 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d27c979227528039315c74b31718e36ddc6536af82ca934c866c1a5995e2e8f6 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.pDa 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.pDa 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.pDa 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8fdcc01dc53b134818a19e33ea7a24cc 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.0Ke 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8fdcc01dc53b134818a19e33ea7a24cc 1 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
8fdcc01dc53b134818a19e33ea7a24cc 1 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8fdcc01dc53b134818a19e33ea7a24cc 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:33.448 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.0Ke 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.0Ke 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.0Ke 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e27f7bf212937f7b6356b619dd68de2478bf8fdf33700a52 00:14:33.707 09:05:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.32b 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e27f7bf212937f7b6356b619dd68de2478bf8fdf33700a52 2 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e27f7bf212937f7b6356b619dd68de2478bf8fdf33700a52 2 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e27f7bf212937f7b6356b619dd68de2478bf8fdf33700a52 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.32b 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.32b 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.32b 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=560ebc0656ec4d73f5f5ff23f610eed3db0a69598ecd8b6e 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.q2n 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 560ebc0656ec4d73f5f5ff23f610eed3db0a69598ecd8b6e 2 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 560ebc0656ec4d73f5f5ff23f610eed3db0a69598ecd8b6e 2 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=560ebc0656ec4d73f5f5ff23f610eed3db0a69598ecd8b6e 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.q2n 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.q2n 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.q2n 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b73f00dbd5f3f9af36f71c80c410d007 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.5zo 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b73f00dbd5f3f9af36f71c80c410d007 1 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b73f00dbd5f3f9af36f71c80c410d007 1 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b73f00dbd5f3f9af36f71c80c410d007 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.5zo 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.5zo 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.5zo 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d474333550bafc04a252f387d352718d72c9d708bb7a5f946ae20538196ce4f5 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Vy4 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d474333550bafc04a252f387d352718d72c9d708bb7a5f946ae20538196ce4f5 3 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 d474333550bafc04a252f387d352718d72c9d708bb7a5f946ae20538196ce4f5 3 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d474333550bafc04a252f387d352718d72c9d708bb7a5f946ae20538196ce4f5 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Vy4 00:14:33.707 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Vy4 00:14:33.708 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Vy4 00:14:33.708 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:14:33.708 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3314474 00:14:33.708 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3314474 ']' 00:14:33.708 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.708 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:33.708 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
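Each `gen_dhchap_key` invocation above reads random bytes with `xxd -p -c0 /dev/urandom` and then pipes them through an inline `python -` call (`format_key DHHC-1 <hex> <digest>`) to produce the secret file. A hedged sketch of what that formatting step amounts to, following the NVMe-oF DH-HMAC-CHAP secret representation (base64 of the raw key bytes followed by their little-endian CRC32, in a `DHHC-1:<hash>:…:` envelope); this is an assumed reconstruction, not SPDK's exact code, and the sample key value is copied from the trace:

```python
import base64
import zlib


def format_dhchap_key(key_hex: str, digest: int) -> str:
    """Sketch of DHHC-1 secret formatting (assumed from the NVMe-oF
    DH-HMAC-CHAP secret representation): base64(key || crc32(key)
    little-endian); digest field 0=null, 1=sha256, 2=sha384, 3=sha512,
    matching the `digests` map seen in the trace."""
    key = bytes.fromhex(key_hex)
    crc = zlib.crc32(key).to_bytes(4, "little")
    return "DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode())


# key value copied from the trace above (keys[1], sha256, 16 random bytes)
print(format_dhchap_key("8fdcc01dc53b134818a19e33ea7a24cc", 1))
```

The resulting string is what gets written to the `/tmp/spdk.key-*` files (chmod 0600) and later registered with `keyring_file_add_key` on both the target and host RPC sockets.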
00:14:33.708 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:33.708 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.966 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:33.966 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:14:33.966 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3314499 /var/tmp/host.sock 00:14:33.966 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3314499 ']' 00:14:33.966 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:14:33.966 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:33.966 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:33.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:14:33.966 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:33.966 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.223 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:34.223 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:14:34.223 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:14:34.223 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.223 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.223 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.223 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:34.223 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.elz 00:14:34.223 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.223 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.481 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.481 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.elz 00:14:34.481 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.elz 00:14:34.739 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.pDa ]] 00:14:34.739 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pDa 00:14:34.739 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.740 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.740 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.740 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pDa 00:14:34.740 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pDa 00:14:34.997 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:34.997 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.0Ke 00:14:34.997 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.997 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.997 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.997 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.0Ke 00:14:34.998 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.0Ke 00:14:35.255 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.32b ]] 00:14:35.255 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.32b 00:14:35.255 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.255 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.255 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.255 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.32b 00:14:35.255 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.32b 00:14:35.513 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:35.514 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.q2n 00:14:35.514 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.514 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.514 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.514 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.q2n 00:14:35.514 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.q2n 00:14:35.772 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.5zo ]] 00:14:35.772 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.5zo 00:14:35.772 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.772 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.772 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.772 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.5zo 00:14:35.772 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.5zo 00:14:36.030 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:36.030 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Vy4 00:14:36.030 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.030 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.030 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.030 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Vy4 00:14:36.030 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Vy4 00:14:36.318 09:05:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:14:36.318 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:36.318 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:36.318 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:36.318 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:36.318 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:36.624 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:14:36.625 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:36.625 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:36.625 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:36.625 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:36.625 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.625 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.625 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.625 09:05:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.625 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.625 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.625 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.625 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.906 00:14:36.906 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:36.906 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:36.907 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.165 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.165 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.165 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.165 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:37.423 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.423 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:37.423 { 00:14:37.423 "cntlid": 1, 00:14:37.423 "qid": 0, 00:14:37.423 "state": "enabled", 00:14:37.423 "thread": "nvmf_tgt_poll_group_000", 00:14:37.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:37.423 "listen_address": { 00:14:37.423 "trtype": "TCP", 00:14:37.423 "adrfam": "IPv4", 00:14:37.423 "traddr": "10.0.0.2", 00:14:37.423 "trsvcid": "4420" 00:14:37.423 }, 00:14:37.423 "peer_address": { 00:14:37.423 "trtype": "TCP", 00:14:37.423 "adrfam": "IPv4", 00:14:37.423 "traddr": "10.0.0.1", 00:14:37.423 "trsvcid": "50826" 00:14:37.423 }, 00:14:37.423 "auth": { 00:14:37.423 "state": "completed", 00:14:37.423 "digest": "sha256", 00:14:37.423 "dhgroup": "null" 00:14:37.423 } 00:14:37.423 } 00:14:37.423 ]' 00:14:37.423 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:37.423 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:37.423 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:37.423 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:37.423 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:37.423 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.423 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.423 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.681 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:14:37.681 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:14:38.615 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.615 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:38.615 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.615 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.615 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.615 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:38.615 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:14:38.615 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:38.874 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:14:38.874 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:38.874 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:38.874 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:38.874 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:38.874 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.874 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.874 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.874 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.874 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.874 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.874 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.875 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.441 00:14:39.441 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:39.441 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:39.441 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.699 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.699 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.699 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.699 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.699 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.699 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:39.699 { 00:14:39.699 "cntlid": 3, 00:14:39.699 "qid": 0, 00:14:39.699 "state": "enabled", 00:14:39.699 "thread": "nvmf_tgt_poll_group_000", 00:14:39.699 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:39.699 "listen_address": { 00:14:39.699 "trtype": "TCP", 00:14:39.699 "adrfam": "IPv4", 00:14:39.699 
"traddr": "10.0.0.2", 00:14:39.699 "trsvcid": "4420" 00:14:39.699 }, 00:14:39.699 "peer_address": { 00:14:39.699 "trtype": "TCP", 00:14:39.699 "adrfam": "IPv4", 00:14:39.699 "traddr": "10.0.0.1", 00:14:39.699 "trsvcid": "57548" 00:14:39.699 }, 00:14:39.699 "auth": { 00:14:39.699 "state": "completed", 00:14:39.699 "digest": "sha256", 00:14:39.699 "dhgroup": "null" 00:14:39.699 } 00:14:39.699 } 00:14:39.699 ]' 00:14:39.699 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:39.699 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:39.699 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:39.699 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:39.699 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:39.699 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.699 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.699 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.958 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==: 00:14:39.958 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==: 00:14:40.892 09:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.892 09:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:40.892 09:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.892 09:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.892 09:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.892 09:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:40.892 09:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:40.892 09:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:41.150 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:14:41.150 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:41.150 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:41.150 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:14:41.150 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:41.150 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.150 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.150 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.150 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.150 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.150 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.150 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.150 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.408 00:14:41.665 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:41.665 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.665 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:41.924 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.924 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.924 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.924 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.924 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.924 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:41.924 { 00:14:41.924 "cntlid": 5, 00:14:41.924 "qid": 0, 00:14:41.924 "state": "enabled", 00:14:41.924 "thread": "nvmf_tgt_poll_group_000", 00:14:41.924 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:41.924 "listen_address": { 00:14:41.924 "trtype": "TCP", 00:14:41.924 "adrfam": "IPv4", 00:14:41.924 "traddr": "10.0.0.2", 00:14:41.924 "trsvcid": "4420" 00:14:41.924 }, 00:14:41.924 "peer_address": { 00:14:41.924 "trtype": "TCP", 00:14:41.924 "adrfam": "IPv4", 00:14:41.924 "traddr": "10.0.0.1", 00:14:41.924 "trsvcid": "57566" 00:14:41.924 }, 00:14:41.924 "auth": { 00:14:41.924 "state": "completed", 00:14:41.924 "digest": "sha256", 00:14:41.924 "dhgroup": "null" 00:14:41.924 } 00:14:41.924 } 00:14:41.924 ]' 00:14:41.924 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:41.924 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:41.924 09:05:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:41.924 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:41.924 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:41.924 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.924 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.924 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.182 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs: 00:14:42.182 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs: 00:14:43.115 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.116 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:43.116 
09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.116 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.116 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.116 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:43.116 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:43.116 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:43.374 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:14:43.374 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:43.374 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:43.374 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:43.374 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:43.374 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.374 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:43.374 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.374 09:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.374 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.374 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:43.374 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:43.374 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:43.632 00:14:43.632 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:43.632 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:43.632 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.198 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.198 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.198 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.198 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.198 09:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.198 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:44.198 { 00:14:44.198 "cntlid": 7, 00:14:44.198 "qid": 0, 00:14:44.198 "state": "enabled", 00:14:44.198 "thread": "nvmf_tgt_poll_group_000", 00:14:44.198 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:44.198 "listen_address": { 00:14:44.198 "trtype": "TCP", 00:14:44.198 "adrfam": "IPv4", 00:14:44.198 "traddr": "10.0.0.2", 00:14:44.198 "trsvcid": "4420" 00:14:44.198 }, 00:14:44.198 "peer_address": { 00:14:44.198 "trtype": "TCP", 00:14:44.198 "adrfam": "IPv4", 00:14:44.198 "traddr": "10.0.0.1", 00:14:44.198 "trsvcid": "57590" 00:14:44.198 }, 00:14:44.198 "auth": { 00:14:44.198 "state": "completed", 00:14:44.198 "digest": "sha256", 00:14:44.198 "dhgroup": "null" 00:14:44.198 } 00:14:44.198 } 00:14:44.198 ]' 00:14:44.198 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:44.198 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:44.198 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:44.198 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:44.198 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:44.198 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.198 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.198 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:14:44.456 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:14:44.456 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:14:45.390 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.390 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:45.390 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.390 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.390 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.390 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:45.390 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:45.390 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:45.390 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:45.648 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:14:45.648 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:45.648 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:45.648 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:45.648 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:45.648 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.648 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.648 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.648 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.648 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.648 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.648 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.648 09:05:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.906 00:14:45.906 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:45.906 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.906 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:46.164 09:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.164 09:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.164 09:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.164 09:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.164 09:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.164 09:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:46.164 { 00:14:46.164 "cntlid": 9, 00:14:46.164 "qid": 0, 00:14:46.164 "state": "enabled", 00:14:46.164 "thread": "nvmf_tgt_poll_group_000", 00:14:46.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:46.164 "listen_address": { 00:14:46.164 "trtype": "TCP", 00:14:46.164 "adrfam": "IPv4", 00:14:46.164 "traddr": "10.0.0.2", 00:14:46.164 "trsvcid": "4420" 00:14:46.164 }, 00:14:46.164 "peer_address": { 
00:14:46.164 "trtype": "TCP", 00:14:46.164 "adrfam": "IPv4", 00:14:46.164 "traddr": "10.0.0.1", 00:14:46.164 "trsvcid": "57612" 00:14:46.164 }, 00:14:46.164 "auth": { 00:14:46.164 "state": "completed", 00:14:46.164 "digest": "sha256", 00:14:46.164 "dhgroup": "ffdhe2048" 00:14:46.164 } 00:14:46.164 } 00:14:46.164 ]' 00:14:46.164 09:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:46.164 09:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:46.164 09:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:46.164 09:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:46.164 09:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:46.423 09:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.423 09:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.423 09:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.680 09:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:14:46.680 09:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:14:47.613 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.613 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:47.613 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.613 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.613 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.613 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:47.613 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:47.613 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:47.871 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:14:47.871 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:47.871 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:47.871 09:05:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:47.871 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:47.871 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.871 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.871 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.871 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.871 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.871 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.871 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.871 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.129 00:14:48.129 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:48.129 09:05:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:48.129 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.386 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.386 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.386 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.386 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.386 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.386 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:48.386 { 00:14:48.386 "cntlid": 11, 00:14:48.386 "qid": 0, 00:14:48.386 "state": "enabled", 00:14:48.386 "thread": "nvmf_tgt_poll_group_000", 00:14:48.386 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:48.386 "listen_address": { 00:14:48.386 "trtype": "TCP", 00:14:48.386 "adrfam": "IPv4", 00:14:48.386 "traddr": "10.0.0.2", 00:14:48.386 "trsvcid": "4420" 00:14:48.386 }, 00:14:48.386 "peer_address": { 00:14:48.386 "trtype": "TCP", 00:14:48.386 "adrfam": "IPv4", 00:14:48.386 "traddr": "10.0.0.1", 00:14:48.386 "trsvcid": "57634" 00:14:48.386 }, 00:14:48.386 "auth": { 00:14:48.386 "state": "completed", 00:14:48.386 "digest": "sha256", 00:14:48.386 "dhgroup": "ffdhe2048" 00:14:48.386 } 00:14:48.386 } 00:14:48.386 ]' 00:14:48.386 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:48.644 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:14:48.644 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:48.644 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:48.644 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:48.644 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.644 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.644 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.902 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==: 00:14:48.902 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==: 00:14:49.836 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.836 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:49.836 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.836 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.836 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.836 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:49.836 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:49.836 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:50.094 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:14:50.094 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:50.094 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:50.094 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:50.094 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:50.094 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.094 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.094 09:05:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.094 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.094 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.094 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.094 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.094 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.351 00:14:50.351 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:50.351 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.351 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:50.608 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.608 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.608 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.608 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.608 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.608 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:50.608 { 00:14:50.608 "cntlid": 13, 00:14:50.608 "qid": 0, 00:14:50.608 "state": "enabled", 00:14:50.608 "thread": "nvmf_tgt_poll_group_000", 00:14:50.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:50.608 "listen_address": { 00:14:50.608 "trtype": "TCP", 00:14:50.608 "adrfam": "IPv4", 00:14:50.608 "traddr": "10.0.0.2", 00:14:50.608 "trsvcid": "4420" 00:14:50.608 }, 00:14:50.608 "peer_address": { 00:14:50.608 "trtype": "TCP", 00:14:50.608 "adrfam": "IPv4", 00:14:50.608 "traddr": "10.0.0.1", 00:14:50.608 "trsvcid": "40480" 00:14:50.608 }, 00:14:50.608 "auth": { 00:14:50.608 "state": "completed", 00:14:50.608 "digest": "sha256", 00:14:50.608 "dhgroup": "ffdhe2048" 00:14:50.608 } 00:14:50.608 } 00:14:50.608 ]' 00:14:50.608 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:50.608 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:50.608 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:50.865 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:50.865 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:50.865 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.865 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:50.865 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.123 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs: 00:14:51.123 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs: 00:14:52.056 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.056 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:52.056 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.056 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.056 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.056 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:52.056 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:52.056 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:52.314 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:14:52.314 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:52.314 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:52.314 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:52.314 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:52.314 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.314 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:52.314 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.314 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.314 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.314 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:52.314 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:52.314 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:52.571 00:14:52.571 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:52.571 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:52.571 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.829 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.829 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.829 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.829 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.829 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.829 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:52.829 { 00:14:52.829 "cntlid": 15, 00:14:52.829 "qid": 0, 00:14:52.829 "state": "enabled", 00:14:52.829 "thread": "nvmf_tgt_poll_group_000", 00:14:52.829 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:52.829 "listen_address": { 00:14:52.829 "trtype": "TCP", 00:14:52.829 "adrfam": "IPv4", 00:14:52.829 "traddr": "10.0.0.2", 00:14:52.829 "trsvcid": 
"4420" 00:14:52.829 }, 00:14:52.829 "peer_address": { 00:14:52.829 "trtype": "TCP", 00:14:52.829 "adrfam": "IPv4", 00:14:52.829 "traddr": "10.0.0.1", 00:14:52.829 "trsvcid": "40494" 00:14:52.829 }, 00:14:52.829 "auth": { 00:14:52.829 "state": "completed", 00:14:52.829 "digest": "sha256", 00:14:52.829 "dhgroup": "ffdhe2048" 00:14:52.829 } 00:14:52.829 } 00:14:52.829 ]' 00:14:52.829 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:52.829 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:52.829 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:53.088 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:53.088 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:53.088 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.088 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.088 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.346 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:14:53.346 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret 
DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:14:54.279 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.279 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:54.279 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.279 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.279 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.279 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:54.279 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:54.279 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:54.279 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:54.536 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:14:54.536 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:54.536 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:54.536 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:54.536 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:54.536 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.536 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.536 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.536 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.536 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.536 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.536 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.536 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.795 00:14:54.795 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:54.795 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:14:54.795 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.361 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.362 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.362 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.362 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.362 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.362 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:55.362 { 00:14:55.362 "cntlid": 17, 00:14:55.362 "qid": 0, 00:14:55.362 "state": "enabled", 00:14:55.362 "thread": "nvmf_tgt_poll_group_000", 00:14:55.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:55.362 "listen_address": { 00:14:55.362 "trtype": "TCP", 00:14:55.362 "adrfam": "IPv4", 00:14:55.362 "traddr": "10.0.0.2", 00:14:55.362 "trsvcid": "4420" 00:14:55.362 }, 00:14:55.362 "peer_address": { 00:14:55.362 "trtype": "TCP", 00:14:55.362 "adrfam": "IPv4", 00:14:55.362 "traddr": "10.0.0.1", 00:14:55.362 "trsvcid": "40524" 00:14:55.362 }, 00:14:55.362 "auth": { 00:14:55.362 "state": "completed", 00:14:55.362 "digest": "sha256", 00:14:55.362 "dhgroup": "ffdhe3072" 00:14:55.362 } 00:14:55.362 } 00:14:55.362 ]' 00:14:55.362 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:55.362 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:55.362 09:05:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:55.362 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:55.362 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:55.362 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.362 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.362 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.619 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:14:55.619 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:14:56.549 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.549 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:56.549 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.549 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.549 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.549 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:56.549 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:56.549 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:56.807 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:14:56.807 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:56.807 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:56.807 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:56.807 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:56.807 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.807 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.807 09:05:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.807 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.807 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.807 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.807 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.807 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.064 00:14:57.064 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:57.064 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:57.064 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.321 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.321 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.321 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.321 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.321 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.321 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:57.321 { 00:14:57.321 "cntlid": 19, 00:14:57.321 "qid": 0, 00:14:57.321 "state": "enabled", 00:14:57.321 "thread": "nvmf_tgt_poll_group_000", 00:14:57.321 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:57.321 "listen_address": { 00:14:57.321 "trtype": "TCP", 00:14:57.321 "adrfam": "IPv4", 00:14:57.321 "traddr": "10.0.0.2", 00:14:57.321 "trsvcid": "4420" 00:14:57.321 }, 00:14:57.321 "peer_address": { 00:14:57.321 "trtype": "TCP", 00:14:57.321 "adrfam": "IPv4", 00:14:57.321 "traddr": "10.0.0.1", 00:14:57.321 "trsvcid": "40548" 00:14:57.321 }, 00:14:57.321 "auth": { 00:14:57.321 "state": "completed", 00:14:57.321 "digest": "sha256", 00:14:57.321 "dhgroup": "ffdhe3072" 00:14:57.321 } 00:14:57.321 } 00:14:57.321 ]' 00:14:57.321 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:57.577 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:57.577 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:57.577 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:57.577 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:57.577 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.577 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:57.577 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.834 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==: 00:14:57.834 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==: 00:14:58.767 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.767 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:58.767 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.767 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.767 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.767 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:58.767 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:58.767 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:59.025 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:14:59.025 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:59.025 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:59.025 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:59.025 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:59.025 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.025 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.025 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.025 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.025 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.025 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.025 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.025 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.282 00:14:59.540 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:59.540 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:59.540 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.798 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.798 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.798 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.798 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.798 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.798 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:59.798 { 00:14:59.798 "cntlid": 21, 00:14:59.799 "qid": 0, 00:14:59.799 "state": "enabled", 00:14:59.799 "thread": "nvmf_tgt_poll_group_000", 00:14:59.799 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:59.799 "listen_address": { 
00:14:59.799 "trtype": "TCP", 00:14:59.799 "adrfam": "IPv4", 00:14:59.799 "traddr": "10.0.0.2", 00:14:59.799 "trsvcid": "4420" 00:14:59.799 }, 00:14:59.799 "peer_address": { 00:14:59.799 "trtype": "TCP", 00:14:59.799 "adrfam": "IPv4", 00:14:59.799 "traddr": "10.0.0.1", 00:14:59.799 "trsvcid": "54278" 00:14:59.799 }, 00:14:59.799 "auth": { 00:14:59.799 "state": "completed", 00:14:59.799 "digest": "sha256", 00:14:59.799 "dhgroup": "ffdhe3072" 00:14:59.799 } 00:14:59.799 } 00:14:59.799 ]' 00:14:59.799 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:59.799 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:59.799 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:59.799 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:59.799 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:59.799 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.799 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.799 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.056 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs: 00:15:00.056 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs: 00:15:00.991 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.991 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:00.991 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.991 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.991 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.991 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:00.991 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:00.991 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:01.293 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:01.293 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.293 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:15:01.293 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:01.293 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:01.293 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.293 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:01.293 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.293 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.293 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.293 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:01.293 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:01.294 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:01.579 00:15:01.579 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:01.579 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:15:01.579 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.144 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.144 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.144 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.144 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.144 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.144 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:02.144 { 00:15:02.144 "cntlid": 23, 00:15:02.144 "qid": 0, 00:15:02.144 "state": "enabled", 00:15:02.144 "thread": "nvmf_tgt_poll_group_000", 00:15:02.144 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:02.144 "listen_address": { 00:15:02.144 "trtype": "TCP", 00:15:02.144 "adrfam": "IPv4", 00:15:02.144 "traddr": "10.0.0.2", 00:15:02.144 "trsvcid": "4420" 00:15:02.144 }, 00:15:02.144 "peer_address": { 00:15:02.144 "trtype": "TCP", 00:15:02.144 "adrfam": "IPv4", 00:15:02.144 "traddr": "10.0.0.1", 00:15:02.144 "trsvcid": "54310" 00:15:02.144 }, 00:15:02.144 "auth": { 00:15:02.144 "state": "completed", 00:15:02.144 "digest": "sha256", 00:15:02.144 "dhgroup": "ffdhe3072" 00:15:02.144 } 00:15:02.144 } 00:15:02.144 ]' 00:15:02.144 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:02.145 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:02.145 09:05:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:02.145 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:02.145 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:02.145 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.145 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.145 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.401 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:15:02.401 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:15:03.335 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.335 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:03.335 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:03.335 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.335 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.335 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:03.335 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:03.335 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:03.335 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:03.593 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:03.593 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:03.593 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:03.593 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:03.593 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:03.593 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.593 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.593 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:03.593 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.593 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.593 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.593 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.593 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.159 00:15:04.159 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:04.159 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:04.159 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.417 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.417 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.417 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.417 09:05:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.417 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.417 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:04.417 { 00:15:04.417 "cntlid": 25, 00:15:04.417 "qid": 0, 00:15:04.417 "state": "enabled", 00:15:04.417 "thread": "nvmf_tgt_poll_group_000", 00:15:04.417 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:04.417 "listen_address": { 00:15:04.417 "trtype": "TCP", 00:15:04.417 "adrfam": "IPv4", 00:15:04.417 "traddr": "10.0.0.2", 00:15:04.417 "trsvcid": "4420" 00:15:04.417 }, 00:15:04.417 "peer_address": { 00:15:04.417 "trtype": "TCP", 00:15:04.417 "adrfam": "IPv4", 00:15:04.417 "traddr": "10.0.0.1", 00:15:04.417 "trsvcid": "54338" 00:15:04.417 }, 00:15:04.417 "auth": { 00:15:04.417 "state": "completed", 00:15:04.417 "digest": "sha256", 00:15:04.417 "dhgroup": "ffdhe4096" 00:15:04.417 } 00:15:04.417 } 00:15:04.417 ]' 00:15:04.417 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:04.417 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:04.417 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:04.417 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:04.417 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:04.417 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.417 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.417 09:05:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.676 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:15:04.676 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:15:05.610 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.610 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:05.610 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.610 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.610 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.610 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:05.610 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:05.610 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:05.868 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:05.868 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:05.868 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:05.868 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:05.868 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:05.868 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.868 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.868 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.868 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.868 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.868 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.868 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.868 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:06.433 00:15:06.433 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:06.433 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:06.433 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.691 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.691 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.691 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.691 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.691 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.691 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:06.691 { 00:15:06.691 "cntlid": 27, 00:15:06.691 "qid": 0, 00:15:06.691 "state": "enabled", 00:15:06.691 "thread": "nvmf_tgt_poll_group_000", 00:15:06.691 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:06.691 
"listen_address": { 00:15:06.691 "trtype": "TCP", 00:15:06.691 "adrfam": "IPv4", 00:15:06.691 "traddr": "10.0.0.2", 00:15:06.691 "trsvcid": "4420" 00:15:06.691 }, 00:15:06.691 "peer_address": { 00:15:06.691 "trtype": "TCP", 00:15:06.691 "adrfam": "IPv4", 00:15:06.691 "traddr": "10.0.0.1", 00:15:06.691 "trsvcid": "54372" 00:15:06.691 }, 00:15:06.691 "auth": { 00:15:06.691 "state": "completed", 00:15:06.691 "digest": "sha256", 00:15:06.691 "dhgroup": "ffdhe4096" 00:15:06.691 } 00:15:06.691 } 00:15:06.691 ]' 00:15:06.691 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:06.691 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:06.691 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:06.691 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:06.691 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:06.692 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.692 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.692 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.949 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==: 00:15:06.949 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==: 00:15:07.882 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.882 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:07.882 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.882 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.882 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.882 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:07.882 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:07.882 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:08.140 09:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:08.140 09:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.140 09:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:15:08.141 09:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:08.399 09:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:08.399 09:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.399 09:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:08.399 09:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.399 09:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.399 09:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.399 09:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:08.399 09:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:08.399 09:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:08.657 00:15:08.657 09:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:15:08.657 09:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:08.657 09:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.915 09:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.915 09:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.915 09:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.915 09:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.915 09:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.915 09:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:08.915 { 00:15:08.915 "cntlid": 29, 00:15:08.915 "qid": 0, 00:15:08.915 "state": "enabled", 00:15:08.915 "thread": "nvmf_tgt_poll_group_000", 00:15:08.915 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:08.915 "listen_address": { 00:15:08.915 "trtype": "TCP", 00:15:08.915 "adrfam": "IPv4", 00:15:08.915 "traddr": "10.0.0.2", 00:15:08.915 "trsvcid": "4420" 00:15:08.915 }, 00:15:08.915 "peer_address": { 00:15:08.915 "trtype": "TCP", 00:15:08.915 "adrfam": "IPv4", 00:15:08.915 "traddr": "10.0.0.1", 00:15:08.915 "trsvcid": "54394" 00:15:08.915 }, 00:15:08.915 "auth": { 00:15:08.915 "state": "completed", 00:15:08.915 "digest": "sha256", 00:15:08.915 "dhgroup": "ffdhe4096" 00:15:08.915 } 00:15:08.915 } 00:15:08.915 ]' 00:15:08.915 09:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:08.915 09:05:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:08.915 09:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:09.173 09:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:09.173 09:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.173 09:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.173 09:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.173 09:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.431 09:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs: 00:15:09.431 09:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs: 00:15:10.365 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.365 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:10.365 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.365 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.365 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.365 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.365 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:10.365 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:10.623 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:10.623 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.623 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:10.623 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:10.623 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:10.623 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.623 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:10.623 09:05:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.623 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.623 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.623 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:10.623 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:10.623 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:11.189 00:15:11.189 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:11.189 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.189 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.447 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.447 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.447 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.447 09:05:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.447 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.447 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:11.447 { 00:15:11.447 "cntlid": 31, 00:15:11.447 "qid": 0, 00:15:11.447 "state": "enabled", 00:15:11.447 "thread": "nvmf_tgt_poll_group_000", 00:15:11.447 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:11.447 "listen_address": { 00:15:11.447 "trtype": "TCP", 00:15:11.447 "adrfam": "IPv4", 00:15:11.447 "traddr": "10.0.0.2", 00:15:11.447 "trsvcid": "4420" 00:15:11.447 }, 00:15:11.447 "peer_address": { 00:15:11.447 "trtype": "TCP", 00:15:11.447 "adrfam": "IPv4", 00:15:11.447 "traddr": "10.0.0.1", 00:15:11.447 "trsvcid": "41384" 00:15:11.447 }, 00:15:11.447 "auth": { 00:15:11.447 "state": "completed", 00:15:11.447 "digest": "sha256", 00:15:11.447 "dhgroup": "ffdhe4096" 00:15:11.447 } 00:15:11.447 } 00:15:11.447 ]' 00:15:11.447 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:11.447 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:11.447 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:11.447 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:11.447 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.448 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.448 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.448 09:05:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.706 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:15:11.706 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:15:12.639 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.639 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:12.639 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.639 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.639 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.639 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:12.639 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:12.639 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:15:12.639 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:12.897 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:12.897 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:12.897 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:12.897 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:12.897 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:12.897 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.897 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.897 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.897 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.897 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.897 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.897 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.897 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.831 00:15:13.831 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.831 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.831 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.831 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.831 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.831 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.831 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.831 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.831 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:13.831 { 00:15:13.831 "cntlid": 33, 00:15:13.831 "qid": 0, 00:15:13.831 "state": "enabled", 00:15:13.831 "thread": "nvmf_tgt_poll_group_000", 00:15:13.831 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:13.831 "listen_address": { 
00:15:13.831 "trtype": "TCP", 00:15:13.831 "adrfam": "IPv4", 00:15:13.831 "traddr": "10.0.0.2", 00:15:13.831 "trsvcid": "4420" 00:15:13.831 }, 00:15:13.831 "peer_address": { 00:15:13.831 "trtype": "TCP", 00:15:13.831 "adrfam": "IPv4", 00:15:13.831 "traddr": "10.0.0.1", 00:15:13.831 "trsvcid": "41416" 00:15:13.831 }, 00:15:13.831 "auth": { 00:15:13.831 "state": "completed", 00:15:13.831 "digest": "sha256", 00:15:13.831 "dhgroup": "ffdhe6144" 00:15:13.831 } 00:15:13.831 } 00:15:13.831 ]' 00:15:13.831 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.090 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:14.090 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.090 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:14.090 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.090 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.090 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.090 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.348 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:15:14.348 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:15:15.281 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.281 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:15.281 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.281 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.281 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.281 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.281 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:15.282 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:15.540 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:15.540 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:15:15.540 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:15.540 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:15.540 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:15.540 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.540 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.540 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.540 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.540 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.540 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.540 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.540 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.106 00:15:16.106 09:05:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:16.106 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:16.106 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.365 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.365 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.365 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.365 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.622 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.622 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:16.622 { 00:15:16.622 "cntlid": 35, 00:15:16.622 "qid": 0, 00:15:16.622 "state": "enabled", 00:15:16.622 "thread": "nvmf_tgt_poll_group_000", 00:15:16.622 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:16.622 "listen_address": { 00:15:16.622 "trtype": "TCP", 00:15:16.622 "adrfam": "IPv4", 00:15:16.622 "traddr": "10.0.0.2", 00:15:16.623 "trsvcid": "4420" 00:15:16.623 }, 00:15:16.623 "peer_address": { 00:15:16.623 "trtype": "TCP", 00:15:16.623 "adrfam": "IPv4", 00:15:16.623 "traddr": "10.0.0.1", 00:15:16.623 "trsvcid": "41444" 00:15:16.623 }, 00:15:16.623 "auth": { 00:15:16.623 "state": "completed", 00:15:16.623 "digest": "sha256", 00:15:16.623 "dhgroup": "ffdhe6144" 00:15:16.623 } 00:15:16.623 } 00:15:16.623 ]' 00:15:16.623 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:15:16.623 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:16.623 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.623 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:16.623 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:16.623 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.623 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.623 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.880 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==: 00:15:16.880 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==: 00:15:17.814 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.814 09:05:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:17.814 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.814 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.814 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.814 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:17.814 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:17.814 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:18.072 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:18.072 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:18.072 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:18.072 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:18.072 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:18.072 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.072 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.072 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.072 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.072 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.072 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.072 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.072 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.638 00:15:18.638 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.638 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.638 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.897 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.897 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.897 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.897 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.897 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.897 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.897 { 00:15:18.897 "cntlid": 37, 00:15:18.897 "qid": 0, 00:15:18.897 "state": "enabled", 00:15:18.897 "thread": "nvmf_tgt_poll_group_000", 00:15:18.897 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:18.897 "listen_address": { 00:15:18.897 "trtype": "TCP", 00:15:18.897 "adrfam": "IPv4", 00:15:18.897 "traddr": "10.0.0.2", 00:15:18.897 "trsvcid": "4420" 00:15:18.897 }, 00:15:18.897 "peer_address": { 00:15:18.897 "trtype": "TCP", 00:15:18.897 "adrfam": "IPv4", 00:15:18.897 "traddr": "10.0.0.1", 00:15:18.897 "trsvcid": "41482" 00:15:18.897 }, 00:15:18.897 "auth": { 00:15:18.897 "state": "completed", 00:15:18.897 "digest": "sha256", 00:15:18.897 "dhgroup": "ffdhe6144" 00:15:18.897 } 00:15:18.897 } 00:15:18.897 ]' 00:15:18.897 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.897 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:18.897 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:18.897 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:18.897 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:19.155 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:15:19.155 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.155 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.413 09:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs: 00:15:19.413 09:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs: 00:15:20.347 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.347 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:20.347 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.347 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.347 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.347 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:15:20.347 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:20.347 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:20.643 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:15:20.643 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:20.643 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:20.643 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:20.643 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:20.643 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.643 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:20.643 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.643 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.643 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.643 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:20.643 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:20.643 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:21.208 00:15:21.208 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:21.208 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:21.209 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.467 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.467 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.467 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.467 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.467 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.467 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:21.467 { 00:15:21.467 "cntlid": 39, 00:15:21.467 "qid": 0, 00:15:21.467 "state": "enabled", 00:15:21.467 "thread": "nvmf_tgt_poll_group_000", 00:15:21.467 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:21.467 "listen_address": { 00:15:21.467 "trtype": 
"TCP", 00:15:21.467 "adrfam": "IPv4", 00:15:21.467 "traddr": "10.0.0.2", 00:15:21.467 "trsvcid": "4420" 00:15:21.467 }, 00:15:21.467 "peer_address": { 00:15:21.467 "trtype": "TCP", 00:15:21.467 "adrfam": "IPv4", 00:15:21.467 "traddr": "10.0.0.1", 00:15:21.467 "trsvcid": "38142" 00:15:21.467 }, 00:15:21.467 "auth": { 00:15:21.467 "state": "completed", 00:15:21.467 "digest": "sha256", 00:15:21.467 "dhgroup": "ffdhe6144" 00:15:21.467 } 00:15:21.467 } 00:15:21.467 ]' 00:15:21.467 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:21.467 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:21.467 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:21.467 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:21.467 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:21.467 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.467 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.467 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.725 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:15:21.725 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:15:22.658 09:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.658 09:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:22.658 09:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.658 09:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.658 09:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.658 09:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:22.658 09:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:22.658 09:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:22.658 09:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:22.917 09:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:22.917 09:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:22.917 09:05:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:22.917 09:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:22.917 09:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:22.917 09:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.917 09:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.917 09:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.917 09:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.175 09:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.175 09:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.175 09:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.175 09:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.108 00:15:24.108 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:24.108 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.108 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:24.108 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.108 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.108 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.108 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.108 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.108 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.108 { 00:15:24.108 "cntlid": 41, 00:15:24.108 "qid": 0, 00:15:24.108 "state": "enabled", 00:15:24.108 "thread": "nvmf_tgt_poll_group_000", 00:15:24.108 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:24.108 "listen_address": { 00:15:24.108 "trtype": "TCP", 00:15:24.108 "adrfam": "IPv4", 00:15:24.108 "traddr": "10.0.0.2", 00:15:24.108 "trsvcid": "4420" 00:15:24.108 }, 00:15:24.108 "peer_address": { 00:15:24.108 "trtype": "TCP", 00:15:24.108 "adrfam": "IPv4", 00:15:24.108 "traddr": "10.0.0.1", 00:15:24.108 "trsvcid": "38174" 00:15:24.108 }, 00:15:24.108 "auth": { 00:15:24.108 "state": "completed", 00:15:24.108 "digest": "sha256", 00:15:24.108 "dhgroup": "ffdhe8192" 00:15:24.108 } 00:15:24.108 } 00:15:24.108 ]' 00:15:24.108 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.366 09:06:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:24.366 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.366 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:24.367 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.367 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.367 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.367 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.624 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:15:24.624 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:15:25.555 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:15:25.555 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:25.555 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.555 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.556 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.556 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:25.556 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:25.556 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:25.814 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:25.814 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:25.814 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:25.814 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:25.814 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:25.814 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.814 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.814 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.814 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.814 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.814 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.814 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.814 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.812 00:15:26.812 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.812 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.812 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:27.070 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.070 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.070 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.070 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.070 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.070 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:27.070 { 00:15:27.070 "cntlid": 43, 00:15:27.070 "qid": 0, 00:15:27.070 "state": "enabled", 00:15:27.070 "thread": "nvmf_tgt_poll_group_000", 00:15:27.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:27.070 "listen_address": { 00:15:27.070 "trtype": "TCP", 00:15:27.070 "adrfam": "IPv4", 00:15:27.070 "traddr": "10.0.0.2", 00:15:27.070 "trsvcid": "4420" 00:15:27.070 }, 00:15:27.070 "peer_address": { 00:15:27.070 "trtype": "TCP", 00:15:27.070 "adrfam": "IPv4", 00:15:27.070 "traddr": "10.0.0.1", 00:15:27.070 "trsvcid": "38192" 00:15:27.070 }, 00:15:27.070 "auth": { 00:15:27.070 "state": "completed", 00:15:27.070 "digest": "sha256", 00:15:27.070 "dhgroup": "ffdhe8192" 00:15:27.070 } 00:15:27.070 } 00:15:27.070 ]' 00:15:27.070 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:27.070 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:27.070 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:27.070 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:27.070 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.070 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:15:27.070 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.070 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.328 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==: 00:15:27.328 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==: 00:15:28.261 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.261 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:28.261 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.261 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.261 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.262 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:15:28.262 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:28.262 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:28.519 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:28.519 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.519 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:28.519 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:28.519 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:28.519 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.519 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.519 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.519 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.519 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.519 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.519 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.519 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.452 00:15:29.452 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:29.452 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:29.452 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.711 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.711 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.711 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.711 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.711 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.711 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.711 { 00:15:29.711 "cntlid": 45, 00:15:29.711 "qid": 0, 00:15:29.711 "state": "enabled", 00:15:29.711 "thread": "nvmf_tgt_poll_group_000", 00:15:29.711 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:29.711 "listen_address": { 00:15:29.711 "trtype": "TCP", 00:15:29.711 "adrfam": "IPv4", 00:15:29.711 "traddr": "10.0.0.2", 00:15:29.711 "trsvcid": "4420" 00:15:29.711 }, 00:15:29.711 "peer_address": { 00:15:29.711 "trtype": "TCP", 00:15:29.711 "adrfam": "IPv4", 00:15:29.711 "traddr": "10.0.0.1", 00:15:29.711 "trsvcid": "38224" 00:15:29.711 }, 00:15:29.711 "auth": { 00:15:29.711 "state": "completed", 00:15:29.711 "digest": "sha256", 00:15:29.711 "dhgroup": "ffdhe8192" 00:15:29.711 } 00:15:29.711 } 00:15:29.711 ]' 00:15:29.711 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.711 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:29.711 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.711 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:29.711 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.711 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.711 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.711 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.969 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs: 00:15:29.969 09:06:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs: 00:15:30.901 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.901 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:30.901 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.901 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.901 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.901 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.901 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:30.901 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:31.162 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:31.162 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:15:31.162 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:31.162 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:31.162 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:31.162 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.162 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:31.162 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.162 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.162 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.162 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:31.162 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:31.162 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:32.096 00:15:32.096 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:15:32.096 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.096 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.364 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.364 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.364 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.364 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.364 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.364 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.364 { 00:15:32.364 "cntlid": 47, 00:15:32.364 "qid": 0, 00:15:32.364 "state": "enabled", 00:15:32.364 "thread": "nvmf_tgt_poll_group_000", 00:15:32.364 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:32.364 "listen_address": { 00:15:32.364 "trtype": "TCP", 00:15:32.364 "adrfam": "IPv4", 00:15:32.364 "traddr": "10.0.0.2", 00:15:32.364 "trsvcid": "4420" 00:15:32.364 }, 00:15:32.364 "peer_address": { 00:15:32.364 "trtype": "TCP", 00:15:32.364 "adrfam": "IPv4", 00:15:32.364 "traddr": "10.0.0.1", 00:15:32.364 "trsvcid": "56094" 00:15:32.364 }, 00:15:32.364 "auth": { 00:15:32.365 "state": "completed", 00:15:32.365 "digest": "sha256", 00:15:32.365 "dhgroup": "ffdhe8192" 00:15:32.365 } 00:15:32.365 } 00:15:32.365 ]' 00:15:32.365 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.365 09:06:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:32.365 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.365 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:32.365 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.627 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.627 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.627 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.884 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:15:32.884 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:15:33.818 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.818 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:33.818 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.818 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.818 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.818 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:33.818 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:33.818 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.818 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:33.818 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:34.076 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:34.076 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:34.076 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:34.076 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:34.076 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:34.076 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.076 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.076 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.076 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.076 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.076 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.076 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.076 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.333 00:15:34.333 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:34.333 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:34.333 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.899 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.899 09:06:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.899 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.899 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.899 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.899 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.899 { 00:15:34.899 "cntlid": 49, 00:15:34.899 "qid": 0, 00:15:34.899 "state": "enabled", 00:15:34.899 "thread": "nvmf_tgt_poll_group_000", 00:15:34.899 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:34.900 "listen_address": { 00:15:34.900 "trtype": "TCP", 00:15:34.900 "adrfam": "IPv4", 00:15:34.900 "traddr": "10.0.0.2", 00:15:34.900 "trsvcid": "4420" 00:15:34.900 }, 00:15:34.900 "peer_address": { 00:15:34.900 "trtype": "TCP", 00:15:34.900 "adrfam": "IPv4", 00:15:34.900 "traddr": "10.0.0.1", 00:15:34.900 "trsvcid": "56108" 00:15:34.900 }, 00:15:34.900 "auth": { 00:15:34.900 "state": "completed", 00:15:34.900 "digest": "sha384", 00:15:34.900 "dhgroup": "null" 00:15:34.900 } 00:15:34.900 } 00:15:34.900 ]' 00:15:34.900 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.900 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:34.900 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.900 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:34.900 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.900 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.900 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.900 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.158 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:15:35.158 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:15:36.093 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.093 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:36.093 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.093 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.093 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.093 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:36.093 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:36.093 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:36.351 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:36.351 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:36.351 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:36.351 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:36.351 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:36.351 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.351 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.351 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.351 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.351 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.351 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.351 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.351 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.608 00:15:36.608 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:36.608 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:36.608 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.867 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.867 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.867 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.867 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.867 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.867 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:36.867 { 00:15:36.867 "cntlid": 51, 
00:15:36.867 "qid": 0, 00:15:36.867 "state": "enabled", 00:15:36.867 "thread": "nvmf_tgt_poll_group_000", 00:15:36.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:36.867 "listen_address": { 00:15:36.867 "trtype": "TCP", 00:15:36.867 "adrfam": "IPv4", 00:15:36.867 "traddr": "10.0.0.2", 00:15:36.867 "trsvcid": "4420" 00:15:36.867 }, 00:15:36.867 "peer_address": { 00:15:36.867 "trtype": "TCP", 00:15:36.867 "adrfam": "IPv4", 00:15:36.867 "traddr": "10.0.0.1", 00:15:36.867 "trsvcid": "56142" 00:15:36.867 }, 00:15:36.867 "auth": { 00:15:36.867 "state": "completed", 00:15:36.867 "digest": "sha384", 00:15:36.867 "dhgroup": "null" 00:15:36.867 } 00:15:36.867 } 00:15:36.867 ]' 00:15:36.867 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:36.867 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:36.867 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:36.867 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:36.867 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.126 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.126 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.126 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.384 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret 
DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==: 00:15:37.384 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==: 00:15:38.319 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.319 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:38.319 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.319 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.319 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.319 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.319 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:38.319 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:38.577 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 
00:15:38.577 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:38.577 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:38.577 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:38.577 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:38.577 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.577 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.577 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.577 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.577 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.577 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.577 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.577 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.835 00:15:38.835 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:38.835 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.835 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.093 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.093 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.093 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.093 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.093 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.093 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.093 { 00:15:39.093 "cntlid": 53, 00:15:39.093 "qid": 0, 00:15:39.093 "state": "enabled", 00:15:39.093 "thread": "nvmf_tgt_poll_group_000", 00:15:39.093 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:39.093 "listen_address": { 00:15:39.093 "trtype": "TCP", 00:15:39.093 "adrfam": "IPv4", 00:15:39.094 "traddr": "10.0.0.2", 00:15:39.094 "trsvcid": "4420" 00:15:39.094 }, 00:15:39.094 "peer_address": { 00:15:39.094 "trtype": "TCP", 00:15:39.094 "adrfam": "IPv4", 00:15:39.094 "traddr": "10.0.0.1", 00:15:39.094 "trsvcid": "56180" 00:15:39.094 }, 00:15:39.094 "auth": { 00:15:39.094 "state": "completed", 00:15:39.094 "digest": "sha384", 00:15:39.094 "dhgroup": "null" 00:15:39.094 } 00:15:39.094 } 
00:15:39.094 ]' 00:15:39.094 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.094 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:39.094 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.094 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:39.094 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.094 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.094 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.094 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.660 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs: 00:15:39.660 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs: 00:15:40.593 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.593 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.593 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:40.593 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.593 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.593 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.593 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.593 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:40.593 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:40.593 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:40.593 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.593 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:40.593 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:40.593 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:40.593 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.593 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:40.593 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.593 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.593 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.593 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:40.594 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:40.594 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:41.159 00:15:41.159 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:41.159 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:41.159 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.417 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.417 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:15:41.417 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.417 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.417 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.417 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:41.417 { 00:15:41.417 "cntlid": 55, 00:15:41.417 "qid": 0, 00:15:41.417 "state": "enabled", 00:15:41.417 "thread": "nvmf_tgt_poll_group_000", 00:15:41.417 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:41.417 "listen_address": { 00:15:41.417 "trtype": "TCP", 00:15:41.417 "adrfam": "IPv4", 00:15:41.417 "traddr": "10.0.0.2", 00:15:41.417 "trsvcid": "4420" 00:15:41.417 }, 00:15:41.417 "peer_address": { 00:15:41.417 "trtype": "TCP", 00:15:41.417 "adrfam": "IPv4", 00:15:41.417 "traddr": "10.0.0.1", 00:15:41.417 "trsvcid": "44792" 00:15:41.417 }, 00:15:41.417 "auth": { 00:15:41.417 "state": "completed", 00:15:41.417 "digest": "sha384", 00:15:41.417 "dhgroup": "null" 00:15:41.417 } 00:15:41.417 } 00:15:41.417 ]' 00:15:41.417 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:41.417 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:41.417 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:41.417 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:41.417 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:41.417 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.417 09:06:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.417 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.676 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:15:41.676 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:15:42.609 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.609 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:42.609 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.609 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.609 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.609 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:42.609 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:42.609 09:06:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:42.609 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:42.867 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:42.867 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.867 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:42.867 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:42.867 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:42.867 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.867 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.867 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.867 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.867 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.867 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.867 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.867 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.126 00:15:43.126 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.126 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.126 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.692 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.692 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.692 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.692 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.692 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.692 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:43.692 { 00:15:43.692 "cntlid": 57, 00:15:43.692 "qid": 0, 00:15:43.692 "state": "enabled", 00:15:43.692 "thread": "nvmf_tgt_poll_group_000", 00:15:43.692 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:43.692 "listen_address": { 00:15:43.692 "trtype": "TCP", 00:15:43.692 "adrfam": "IPv4", 00:15:43.692 "traddr": "10.0.0.2", 00:15:43.692 "trsvcid": "4420" 00:15:43.692 }, 00:15:43.692 "peer_address": { 00:15:43.692 "trtype": "TCP", 00:15:43.692 "adrfam": "IPv4", 00:15:43.692 "traddr": "10.0.0.1", 00:15:43.692 "trsvcid": "44820" 00:15:43.692 }, 00:15:43.692 "auth": { 00:15:43.692 "state": "completed", 00:15:43.692 "digest": "sha384", 00:15:43.692 "dhgroup": "ffdhe2048" 00:15:43.692 } 00:15:43.692 } 00:15:43.692 ]' 00:15:43.692 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:43.692 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:43.692 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:43.692 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:43.692 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:43.692 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.692 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.692 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.950 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret 
DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:15:43.950 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:15:44.884 09:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.884 09:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:44.884 09:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.884 09:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.884 09:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.884 09:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.884 09:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:44.884 09:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:45.142 09:06:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:45.142 09:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.142 09:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:45.142 09:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:45.142 09:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:45.142 09:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.142 09:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.142 09:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.142 09:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.142 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.142 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.142 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.142 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.400 00:15:45.400 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.400 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.400 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.658 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.658 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.658 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.658 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.658 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.658 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.658 { 00:15:45.658 "cntlid": 59, 00:15:45.658 "qid": 0, 00:15:45.658 "state": "enabled", 00:15:45.658 "thread": "nvmf_tgt_poll_group_000", 00:15:45.658 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:45.658 "listen_address": { 00:15:45.658 "trtype": "TCP", 00:15:45.658 "adrfam": "IPv4", 00:15:45.658 "traddr": "10.0.0.2", 00:15:45.658 "trsvcid": "4420" 00:15:45.658 }, 00:15:45.658 "peer_address": { 00:15:45.658 "trtype": "TCP", 00:15:45.658 "adrfam": "IPv4", 00:15:45.658 "traddr": "10.0.0.1", 00:15:45.658 "trsvcid": "44834" 00:15:45.658 }, 00:15:45.658 "auth": { 00:15:45.658 "state": 
"completed", 00:15:45.658 "digest": "sha384", 00:15:45.658 "dhgroup": "ffdhe2048" 00:15:45.658 } 00:15:45.658 } 00:15:45.658 ]' 00:15:45.658 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.658 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:45.658 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.916 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:45.916 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.916 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.916 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.916 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.174 09:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==: 00:15:46.174 09:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==: 00:15:47.108 09:06:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.108 09:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:47.108 09:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.108 09:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.108 09:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.108 09:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:47.108 09:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:47.108 09:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:47.366 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:47.366 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.366 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:47.366 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:47.366 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:47.366 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.366 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.366 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.366 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.366 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.366 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.366 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.366 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.624 00:15:47.624 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.624 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.624 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.883 
09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.883 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.883 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.883 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.883 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.883 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.883 { 00:15:47.883 "cntlid": 61, 00:15:47.883 "qid": 0, 00:15:47.883 "state": "enabled", 00:15:47.883 "thread": "nvmf_tgt_poll_group_000", 00:15:47.883 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:47.883 "listen_address": { 00:15:47.883 "trtype": "TCP", 00:15:47.883 "adrfam": "IPv4", 00:15:47.883 "traddr": "10.0.0.2", 00:15:47.883 "trsvcid": "4420" 00:15:47.883 }, 00:15:47.883 "peer_address": { 00:15:47.883 "trtype": "TCP", 00:15:47.883 "adrfam": "IPv4", 00:15:47.883 "traddr": "10.0.0.1", 00:15:47.883 "trsvcid": "44860" 00:15:47.883 }, 00:15:47.883 "auth": { 00:15:47.883 "state": "completed", 00:15:47.883 "digest": "sha384", 00:15:47.883 "dhgroup": "ffdhe2048" 00:15:47.883 } 00:15:47.883 } 00:15:47.883 ]' 00:15:47.883 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.883 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:47.883 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.883 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:47.883 09:06:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:48.139 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.139 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.139 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.397 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs: 00:15:48.397 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs: 00:15:49.331 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.331 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:49.331 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.331 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.331 
09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.331 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:49.331 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:49.331 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:49.588 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:49.588 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:49.588 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:49.588 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:49.588 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:49.588 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.589 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:49.589 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.589 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.589 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.589 09:06:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:49.589 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:49.589 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:49.846 00:15:49.846 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.846 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.846 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.104 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.104 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.104 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.104 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.104 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.104 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.104 { 00:15:50.104 "cntlid": 63, 00:15:50.104 
"qid": 0, 00:15:50.104 "state": "enabled", 00:15:50.104 "thread": "nvmf_tgt_poll_group_000", 00:15:50.104 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:50.104 "listen_address": { 00:15:50.104 "trtype": "TCP", 00:15:50.104 "adrfam": "IPv4", 00:15:50.104 "traddr": "10.0.0.2", 00:15:50.104 "trsvcid": "4420" 00:15:50.104 }, 00:15:50.104 "peer_address": { 00:15:50.104 "trtype": "TCP", 00:15:50.104 "adrfam": "IPv4", 00:15:50.104 "traddr": "10.0.0.1", 00:15:50.104 "trsvcid": "59400" 00:15:50.104 }, 00:15:50.104 "auth": { 00:15:50.104 "state": "completed", 00:15:50.104 "digest": "sha384", 00:15:50.104 "dhgroup": "ffdhe2048" 00:15:50.104 } 00:15:50.104 } 00:15:50.104 ]' 00:15:50.104 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.104 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:50.104 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.104 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:50.104 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.363 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.363 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.363 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.620 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:15:50.620 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:15:51.608 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.608 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:51.608 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.608 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.608 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.608 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:51.608 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.608 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:51.608 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:51.890 09:06:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:51.890 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.890 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:51.890 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:51.890 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:51.890 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.890 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.890 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.890 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.890 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.890 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.890 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.890 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.148 00:15:52.148 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.148 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.148 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.405 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.405 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.405 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.405 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.405 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.405 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.405 { 00:15:52.405 "cntlid": 65, 00:15:52.405 "qid": 0, 00:15:52.405 "state": "enabled", 00:15:52.405 "thread": "nvmf_tgt_poll_group_000", 00:15:52.405 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:52.405 "listen_address": { 00:15:52.405 "trtype": "TCP", 00:15:52.405 "adrfam": "IPv4", 00:15:52.405 "traddr": "10.0.0.2", 00:15:52.405 "trsvcid": "4420" 00:15:52.405 }, 00:15:52.405 "peer_address": { 00:15:52.405 "trtype": "TCP", 00:15:52.405 "adrfam": "IPv4", 00:15:52.405 "traddr": "10.0.0.1", 00:15:52.405 "trsvcid": "59438" 00:15:52.405 }, 00:15:52.405 "auth": { 00:15:52.405 "state": 
"completed", 00:15:52.405 "digest": "sha384", 00:15:52.405 "dhgroup": "ffdhe3072" 00:15:52.405 } 00:15:52.405 } 00:15:52.405 ]' 00:15:52.405 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.405 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:52.405 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.405 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:52.405 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.405 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.405 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.405 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.662 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:15:52.662 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret 
DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:15:53.595 09:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.596 09:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:53.596 09:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.596 09:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.596 09:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.596 09:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.596 09:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:53.596 09:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:53.854 09:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:15:53.854 09:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.854 09:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:53.854 09:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:53.854 09:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:15:53.854 09:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.854 09:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.854 09:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.854 09:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.854 09:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.854 09:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.854 09:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.854 09:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.419 00:15:54.419 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.419 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.419 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.678 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.678 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.678 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.678 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.678 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.678 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.678 { 00:15:54.678 "cntlid": 67, 00:15:54.678 "qid": 0, 00:15:54.678 "state": "enabled", 00:15:54.678 "thread": "nvmf_tgt_poll_group_000", 00:15:54.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:54.678 "listen_address": { 00:15:54.678 "trtype": "TCP", 00:15:54.678 "adrfam": "IPv4", 00:15:54.678 "traddr": "10.0.0.2", 00:15:54.678 "trsvcid": "4420" 00:15:54.678 }, 00:15:54.678 "peer_address": { 00:15:54.678 "trtype": "TCP", 00:15:54.678 "adrfam": "IPv4", 00:15:54.678 "traddr": "10.0.0.1", 00:15:54.678 "trsvcid": "59464" 00:15:54.678 }, 00:15:54.678 "auth": { 00:15:54.678 "state": "completed", 00:15:54.678 "digest": "sha384", 00:15:54.678 "dhgroup": "ffdhe3072" 00:15:54.678 } 00:15:54.678 } 00:15:54.678 ]' 00:15:54.678 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.678 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:54.678 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.678 09:06:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:54.678 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.678 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.678 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.678 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.937 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==: 00:15:54.937 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==: 00:15:55.868 09:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.868 09:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:55.868 09:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:55.868 09:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.868 09:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.868 09:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.868 09:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:55.868 09:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:56.126 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:15:56.126 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.126 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:56.126 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:56.126 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:56.126 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.126 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.126 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.126 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:56.126 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.383 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.384 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.384 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.642 00:15:56.642 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:56.642 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.642 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.900 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.900 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.900 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.900 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.900 09:06:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.900 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.900 { 00:15:56.900 "cntlid": 69, 00:15:56.900 "qid": 0, 00:15:56.900 "state": "enabled", 00:15:56.900 "thread": "nvmf_tgt_poll_group_000", 00:15:56.900 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:56.900 "listen_address": { 00:15:56.900 "trtype": "TCP", 00:15:56.900 "adrfam": "IPv4", 00:15:56.900 "traddr": "10.0.0.2", 00:15:56.900 "trsvcid": "4420" 00:15:56.900 }, 00:15:56.900 "peer_address": { 00:15:56.900 "trtype": "TCP", 00:15:56.900 "adrfam": "IPv4", 00:15:56.900 "traddr": "10.0.0.1", 00:15:56.900 "trsvcid": "59488" 00:15:56.900 }, 00:15:56.900 "auth": { 00:15:56.900 "state": "completed", 00:15:56.900 "digest": "sha384", 00:15:56.900 "dhgroup": "ffdhe3072" 00:15:56.900 } 00:15:56.900 } 00:15:56.900 ]' 00:15:56.900 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.900 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:56.900 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.157 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:57.157 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.157 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.157 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.157 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.414 09:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs: 00:15:57.414 09:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs: 00:15:58.344 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.344 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:58.344 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.344 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.344 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.344 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.344 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:58.344 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:58.600 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:15:58.600 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.600 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:58.600 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:58.600 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:58.600 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.600 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:58.600 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.600 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.600 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.600 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:58.600 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:58.601 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:58.859 00:15:58.859 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.859 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.859 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.116 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.116 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.116 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.116 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.116 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.116 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.116 { 00:15:59.116 "cntlid": 71, 00:15:59.116 "qid": 0, 00:15:59.116 "state": "enabled", 00:15:59.116 "thread": "nvmf_tgt_poll_group_000", 00:15:59.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:59.116 "listen_address": { 00:15:59.116 "trtype": "TCP", 00:15:59.116 "adrfam": "IPv4", 00:15:59.116 "traddr": "10.0.0.2", 00:15:59.116 "trsvcid": "4420" 00:15:59.116 }, 00:15:59.116 "peer_address": { 00:15:59.116 "trtype": "TCP", 00:15:59.116 "adrfam": "IPv4", 00:15:59.116 "traddr": "10.0.0.1", 
00:15:59.116 "trsvcid": "59512" 00:15:59.116 }, 00:15:59.116 "auth": { 00:15:59.116 "state": "completed", 00:15:59.116 "digest": "sha384", 00:15:59.116 "dhgroup": "ffdhe3072" 00:15:59.116 } 00:15:59.116 } 00:15:59.116 ]' 00:15:59.372 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.372 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:59.372 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.372 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:59.372 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.372 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.372 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.372 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.629 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:15:59.629 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:16:00.563 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.563 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:00.563 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.563 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.563 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.563 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:00.563 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.563 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:00.563 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:00.820 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:00.820 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.820 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:00.820 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:00.820 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:00.820 09:06:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.820 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.820 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.820 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.820 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.820 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.820 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.820 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.384 00:16:01.384 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.384 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.384 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.641 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.641 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.641 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.642 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.642 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.642 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.642 { 00:16:01.642 "cntlid": 73, 00:16:01.642 "qid": 0, 00:16:01.642 "state": "enabled", 00:16:01.642 "thread": "nvmf_tgt_poll_group_000", 00:16:01.642 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:01.642 "listen_address": { 00:16:01.642 "trtype": "TCP", 00:16:01.642 "adrfam": "IPv4", 00:16:01.642 "traddr": "10.0.0.2", 00:16:01.642 "trsvcid": "4420" 00:16:01.642 }, 00:16:01.642 "peer_address": { 00:16:01.642 "trtype": "TCP", 00:16:01.642 "adrfam": "IPv4", 00:16:01.642 "traddr": "10.0.0.1", 00:16:01.642 "trsvcid": "54424" 00:16:01.642 }, 00:16:01.642 "auth": { 00:16:01.642 "state": "completed", 00:16:01.642 "digest": "sha384", 00:16:01.642 "dhgroup": "ffdhe4096" 00:16:01.642 } 00:16:01.642 } 00:16:01.642 ]' 00:16:01.642 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.642 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:01.642 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.642 09:06:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:01.642 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.902 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.902 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.902 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.158 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:16:02.158 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:16:03.088 09:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.088 09:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:03.088 09:06:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.088 09:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.088 09:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.088 09:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.088 09:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:03.088 09:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:03.345 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:03.345 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.345 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:03.345 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:03.345 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:03.345 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.345 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.345 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.345 09:06:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.345 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.345 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.345 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.345 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.909 00:16:03.909 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.909 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.909 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.909 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.910 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.910 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.910 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:03.910 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.910 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.910 { 00:16:03.910 "cntlid": 75, 00:16:03.910 "qid": 0, 00:16:03.910 "state": "enabled", 00:16:03.910 "thread": "nvmf_tgt_poll_group_000", 00:16:03.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:03.910 "listen_address": { 00:16:03.910 "trtype": "TCP", 00:16:03.910 "adrfam": "IPv4", 00:16:03.910 "traddr": "10.0.0.2", 00:16:03.910 "trsvcid": "4420" 00:16:03.910 }, 00:16:03.910 "peer_address": { 00:16:03.910 "trtype": "TCP", 00:16:03.910 "adrfam": "IPv4", 00:16:03.910 "traddr": "10.0.0.1", 00:16:03.910 "trsvcid": "54440" 00:16:03.910 }, 00:16:03.910 "auth": { 00:16:03.910 "state": "completed", 00:16:03.910 "digest": "sha384", 00:16:03.910 "dhgroup": "ffdhe4096" 00:16:03.910 } 00:16:03.910 } 00:16:03.910 ]' 00:16:03.910 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:04.167 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:04.167 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:04.167 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:04.167 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:04.167 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.167 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.167 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.424 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==: 00:16:04.424 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==: 00:16:05.357 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.357 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:05.357 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.357 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.357 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.357 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:05.357 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:05.357 09:06:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:05.614 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:05.614 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.614 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:05.614 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:05.614 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:05.614 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.614 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.614 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.614 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.614 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.614 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.614 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.614 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.872 00:16:06.129 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.129 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.129 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:06.387 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.387 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.387 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.387 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.387 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.387 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.387 { 00:16:06.387 "cntlid": 77, 00:16:06.387 "qid": 0, 00:16:06.387 "state": "enabled", 00:16:06.387 "thread": "nvmf_tgt_poll_group_000", 00:16:06.387 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:06.387 "listen_address": { 00:16:06.387 "trtype": "TCP", 00:16:06.387 "adrfam": "IPv4", 00:16:06.387 "traddr": "10.0.0.2", 00:16:06.387 
"trsvcid": "4420" 00:16:06.387 }, 00:16:06.387 "peer_address": { 00:16:06.387 "trtype": "TCP", 00:16:06.387 "adrfam": "IPv4", 00:16:06.387 "traddr": "10.0.0.1", 00:16:06.387 "trsvcid": "54480" 00:16:06.387 }, 00:16:06.387 "auth": { 00:16:06.387 "state": "completed", 00:16:06.387 "digest": "sha384", 00:16:06.387 "dhgroup": "ffdhe4096" 00:16:06.387 } 00:16:06.387 } 00:16:06.387 ]' 00:16:06.387 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.387 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:06.387 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.387 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:06.387 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.387 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.387 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.387 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.645 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs: 00:16:06.645 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs: 00:16:07.577 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.577 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:07.577 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.577 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.577 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.577 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.577 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:07.577 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:07.835 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:07.835 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.835 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:07.835 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:07.835 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:07.835 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.835 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:07.835 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.835 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.835 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.835 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:07.835 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:07.835 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:08.400 00:16:08.401 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:08.401 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.401 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.401 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.401 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.401 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.401 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.658 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.658 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.658 { 00:16:08.658 "cntlid": 79, 00:16:08.658 "qid": 0, 00:16:08.658 "state": "enabled", 00:16:08.658 "thread": "nvmf_tgt_poll_group_000", 00:16:08.658 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:08.658 "listen_address": { 00:16:08.658 "trtype": "TCP", 00:16:08.658 "adrfam": "IPv4", 00:16:08.658 "traddr": "10.0.0.2", 00:16:08.658 "trsvcid": "4420" 00:16:08.658 }, 00:16:08.658 "peer_address": { 00:16:08.658 "trtype": "TCP", 00:16:08.658 "adrfam": "IPv4", 00:16:08.658 "traddr": "10.0.0.1", 00:16:08.658 "trsvcid": "54516" 00:16:08.658 }, 00:16:08.658 "auth": { 00:16:08.658 "state": "completed", 00:16:08.658 "digest": "sha384", 00:16:08.658 "dhgroup": "ffdhe4096" 00:16:08.658 } 00:16:08.658 } 00:16:08.658 ]' 00:16:08.658 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.658 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:08.658 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.658 09:06:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:08.658 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.658 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.658 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.659 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.916 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:16:08.916 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:16:09.848 09:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.848 09:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:09.848 09:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.848 09:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:09.848 09:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.848 09:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:09.848 09:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.848 09:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:09.848 09:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:10.106 09:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:10.106 09:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.106 09:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:10.106 09:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:10.106 09:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:10.106 09:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.106 09:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.106 09:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.106 09:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:10.106 09:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.106 09:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.106 09:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.106 09:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.672 00:16:10.672 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.672 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.672 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.929 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.929 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.929 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.929 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.929 09:06:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.929 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.929 { 00:16:10.929 "cntlid": 81, 00:16:10.929 "qid": 0, 00:16:10.929 "state": "enabled", 00:16:10.929 "thread": "nvmf_tgt_poll_group_000", 00:16:10.929 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:10.929 "listen_address": { 00:16:10.929 "trtype": "TCP", 00:16:10.929 "adrfam": "IPv4", 00:16:10.929 "traddr": "10.0.0.2", 00:16:10.929 "trsvcid": "4420" 00:16:10.929 }, 00:16:10.929 "peer_address": { 00:16:10.929 "trtype": "TCP", 00:16:10.929 "adrfam": "IPv4", 00:16:10.929 "traddr": "10.0.0.1", 00:16:10.929 "trsvcid": "48460" 00:16:10.929 }, 00:16:10.929 "auth": { 00:16:10.929 "state": "completed", 00:16:10.929 "digest": "sha384", 00:16:10.929 "dhgroup": "ffdhe6144" 00:16:10.929 } 00:16:10.929 } 00:16:10.929 ]' 00:16:10.929 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.929 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:10.929 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.929 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:10.929 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.929 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.929 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.929 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.494 09:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:16:11.494 09:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:16:12.427 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.427 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:12.427 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.427 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.428 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.428 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.428 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:12.428 09:06:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:12.428 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:12.428 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.428 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:12.428 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:12.428 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:12.428 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.428 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.428 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.428 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.428 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.428 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.428 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.428 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.361 00:16:13.361 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.361 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.361 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.361 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.361 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.361 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.361 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.361 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.361 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.361 { 00:16:13.361 "cntlid": 83, 00:16:13.361 "qid": 0, 00:16:13.361 "state": "enabled", 00:16:13.361 "thread": "nvmf_tgt_poll_group_000", 00:16:13.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:13.361 "listen_address": { 00:16:13.361 "trtype": "TCP", 00:16:13.361 "adrfam": "IPv4", 00:16:13.361 "traddr": "10.0.0.2", 00:16:13.361 
"trsvcid": "4420" 00:16:13.361 }, 00:16:13.361 "peer_address": { 00:16:13.361 "trtype": "TCP", 00:16:13.361 "adrfam": "IPv4", 00:16:13.361 "traddr": "10.0.0.1", 00:16:13.361 "trsvcid": "48492" 00:16:13.361 }, 00:16:13.361 "auth": { 00:16:13.361 "state": "completed", 00:16:13.361 "digest": "sha384", 00:16:13.361 "dhgroup": "ffdhe6144" 00:16:13.361 } 00:16:13.361 } 00:16:13.361 ]' 00:16:13.361 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.619 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:13.619 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.619 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:13.619 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.619 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.619 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.619 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.876 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==: 00:16:13.876 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==: 00:16:14.806 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.806 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:14.806 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.806 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.806 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.806 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.806 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:14.806 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:15.072 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:15.072 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.072 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:15.072 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:15.072 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:15.072 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.072 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.072 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.072 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.072 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.072 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.072 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.072 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.637 00:16:15.637 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.637 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:16:15.637 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.894 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.894 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.894 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.894 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.894 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.894 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.894 { 00:16:15.894 "cntlid": 85, 00:16:15.894 "qid": 0, 00:16:15.894 "state": "enabled", 00:16:15.894 "thread": "nvmf_tgt_poll_group_000", 00:16:15.894 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:15.894 "listen_address": { 00:16:15.894 "trtype": "TCP", 00:16:15.894 "adrfam": "IPv4", 00:16:15.894 "traddr": "10.0.0.2", 00:16:15.894 "trsvcid": "4420" 00:16:15.894 }, 00:16:15.894 "peer_address": { 00:16:15.894 "trtype": "TCP", 00:16:15.894 "adrfam": "IPv4", 00:16:15.894 "traddr": "10.0.0.1", 00:16:15.894 "trsvcid": "48534" 00:16:15.894 }, 00:16:15.894 "auth": { 00:16:15.894 "state": "completed", 00:16:15.894 "digest": "sha384", 00:16:15.894 "dhgroup": "ffdhe6144" 00:16:15.894 } 00:16:15.894 } 00:16:15.894 ]' 00:16:15.894 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.894 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:15.894 09:06:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.894 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:15.894 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.894 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.894 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.894 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.557 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs: 00:16:16.557 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs: 00:16:17.156 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.414 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:17.414 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.414 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.414 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.414 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.414 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:17.414 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:17.673 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:17.673 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.673 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:17.673 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:17.673 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:17.673 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.673 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:17.673 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.673 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.673 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.673 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:17.673 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:17.673 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:18.239 00:16:18.239 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.239 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.239 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.497 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.497 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.497 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.497 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:18.497 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.497 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.497 { 00:16:18.497 "cntlid": 87, 00:16:18.497 "qid": 0, 00:16:18.497 "state": "enabled", 00:16:18.497 "thread": "nvmf_tgt_poll_group_000", 00:16:18.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:18.497 "listen_address": { 00:16:18.497 "trtype": "TCP", 00:16:18.497 "adrfam": "IPv4", 00:16:18.497 "traddr": "10.0.0.2", 00:16:18.497 "trsvcid": "4420" 00:16:18.497 }, 00:16:18.497 "peer_address": { 00:16:18.497 "trtype": "TCP", 00:16:18.497 "adrfam": "IPv4", 00:16:18.497 "traddr": "10.0.0.1", 00:16:18.497 "trsvcid": "48564" 00:16:18.497 }, 00:16:18.497 "auth": { 00:16:18.497 "state": "completed", 00:16:18.497 "digest": "sha384", 00:16:18.497 "dhgroup": "ffdhe6144" 00:16:18.497 } 00:16:18.497 } 00:16:18.497 ]' 00:16:18.497 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.497 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:18.497 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.497 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:18.497 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.497 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.497 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.497 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.755 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:16:18.755 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:16:19.688 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.688 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:19.688 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.688 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.688 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.688 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:19.688 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.688 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:19.688 09:06:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:19.946 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:19.946 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.946 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:19.946 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:19.946 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:19.946 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.946 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.946 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.946 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.946 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.946 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.946 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.946 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.880 00:16:20.880 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.880 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.880 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.138 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.138 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.138 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.138 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.138 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.138 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.138 { 00:16:21.138 "cntlid": 89, 00:16:21.138 "qid": 0, 00:16:21.138 "state": "enabled", 00:16:21.138 "thread": "nvmf_tgt_poll_group_000", 00:16:21.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:21.138 "listen_address": { 00:16:21.138 "trtype": "TCP", 00:16:21.138 "adrfam": "IPv4", 00:16:21.138 "traddr": "10.0.0.2", 00:16:21.138 
"trsvcid": "4420" 00:16:21.138 }, 00:16:21.139 "peer_address": { 00:16:21.139 "trtype": "TCP", 00:16:21.139 "adrfam": "IPv4", 00:16:21.139 "traddr": "10.0.0.1", 00:16:21.139 "trsvcid": "57554" 00:16:21.139 }, 00:16:21.139 "auth": { 00:16:21.139 "state": "completed", 00:16:21.139 "digest": "sha384", 00:16:21.139 "dhgroup": "ffdhe8192" 00:16:21.139 } 00:16:21.139 } 00:16:21.139 ]' 00:16:21.139 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.139 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:21.139 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.139 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:21.139 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.397 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.397 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.397 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.655 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:16:21.655 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:16:22.589 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.589 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:22.589 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.589 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.589 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.589 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.589 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:22.589 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:22.847 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:22.847 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.847 09:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:22.847 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:22.847 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:22.847 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.847 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.847 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.847 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.847 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.847 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.847 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.847 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.781 00:16:23.781 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.781 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.781 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.039 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.039 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.039 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.039 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.039 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.039 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.039 { 00:16:24.039 "cntlid": 91, 00:16:24.039 "qid": 0, 00:16:24.039 "state": "enabled", 00:16:24.039 "thread": "nvmf_tgt_poll_group_000", 00:16:24.039 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:24.039 "listen_address": { 00:16:24.039 "trtype": "TCP", 00:16:24.039 "adrfam": "IPv4", 00:16:24.039 "traddr": "10.0.0.2", 00:16:24.039 "trsvcid": "4420" 00:16:24.039 }, 00:16:24.039 "peer_address": { 00:16:24.039 "trtype": "TCP", 00:16:24.039 "adrfam": "IPv4", 00:16:24.039 "traddr": "10.0.0.1", 00:16:24.039 "trsvcid": "57578" 00:16:24.039 }, 00:16:24.039 "auth": { 00:16:24.039 "state": "completed", 00:16:24.039 "digest": "sha384", 00:16:24.039 "dhgroup": "ffdhe8192" 00:16:24.039 } 00:16:24.039 } 00:16:24.039 ]' 00:16:24.039 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.039 09:07:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:24.039 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.039 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:24.039 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.039 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.039 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.039 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.298 09:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==: 00:16:24.298 09:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==: 00:16:25.231 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.231 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:25.231 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.231 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.231 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.231 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.231 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:25.231 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:25.489 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:25.489 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.489 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:25.489 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:25.489 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:25.489 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.489 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:25.489 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.489 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.489 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.489 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.489 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.489 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.424 00:16:26.424 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.424 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.424 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.682 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.682 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.682 09:07:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.682 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.682 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.682 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.682 { 00:16:26.682 "cntlid": 93, 00:16:26.682 "qid": 0, 00:16:26.682 "state": "enabled", 00:16:26.682 "thread": "nvmf_tgt_poll_group_000", 00:16:26.682 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:26.682 "listen_address": { 00:16:26.682 "trtype": "TCP", 00:16:26.682 "adrfam": "IPv4", 00:16:26.682 "traddr": "10.0.0.2", 00:16:26.682 "trsvcid": "4420" 00:16:26.682 }, 00:16:26.682 "peer_address": { 00:16:26.682 "trtype": "TCP", 00:16:26.682 "adrfam": "IPv4", 00:16:26.682 "traddr": "10.0.0.1", 00:16:26.682 "trsvcid": "57614" 00:16:26.682 }, 00:16:26.682 "auth": { 00:16:26.682 "state": "completed", 00:16:26.682 "digest": "sha384", 00:16:26.682 "dhgroup": "ffdhe8192" 00:16:26.682 } 00:16:26.682 } 00:16:26.682 ]' 00:16:26.682 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.682 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:26.682 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.682 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:26.682 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.682 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.682 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.683 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.249 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs: 00:16:27.249 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs: 00:16:28.184 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.184 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:28.184 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.184 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.184 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.184 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.184 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:28.184 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:28.442 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:28.442 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.442 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:28.442 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:28.442 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:28.442 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.442 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:28.442 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.442 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.442 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.442 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:28.442 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:28.442 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:29.376 00:16:29.376 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.376 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.376 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.376 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.376 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.376 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.376 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.376 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.376 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.376 { 00:16:29.376 "cntlid": 95, 00:16:29.376 "qid": 0, 00:16:29.376 "state": "enabled", 00:16:29.376 "thread": "nvmf_tgt_poll_group_000", 00:16:29.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:29.376 "listen_address": { 00:16:29.376 "trtype": "TCP", 00:16:29.376 "adrfam": 
"IPv4", 00:16:29.376 "traddr": "10.0.0.2", 00:16:29.376 "trsvcid": "4420" 00:16:29.376 }, 00:16:29.376 "peer_address": { 00:16:29.376 "trtype": "TCP", 00:16:29.376 "adrfam": "IPv4", 00:16:29.376 "traddr": "10.0.0.1", 00:16:29.376 "trsvcid": "57644" 00:16:29.376 }, 00:16:29.376 "auth": { 00:16:29.376 "state": "completed", 00:16:29.376 "digest": "sha384", 00:16:29.376 "dhgroup": "ffdhe8192" 00:16:29.376 } 00:16:29.376 } 00:16:29.376 ]' 00:16:29.376 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.634 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:29.634 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.634 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:29.634 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.634 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.634 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.634 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.893 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:16:29.893 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:16:30.826 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.826 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:30.826 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.826 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.826 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.826 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:30.826 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:30.826 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.826 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:30.826 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:31.083 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:31.083 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.083 
09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:31.083 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:31.083 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:31.083 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.083 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.083 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.083 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.083 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.083 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.083 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.083 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.650 00:16:31.650 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.650 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.650 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.909 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.909 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.909 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.909 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.909 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.909 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.909 { 00:16:31.909 "cntlid": 97, 00:16:31.909 "qid": 0, 00:16:31.909 "state": "enabled", 00:16:31.909 "thread": "nvmf_tgt_poll_group_000", 00:16:31.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:31.909 "listen_address": { 00:16:31.909 "trtype": "TCP", 00:16:31.909 "adrfam": "IPv4", 00:16:31.909 "traddr": "10.0.0.2", 00:16:31.909 "trsvcid": "4420" 00:16:31.909 }, 00:16:31.909 "peer_address": { 00:16:31.909 "trtype": "TCP", 00:16:31.909 "adrfam": "IPv4", 00:16:31.909 "traddr": "10.0.0.1", 00:16:31.909 "trsvcid": "47734" 00:16:31.909 }, 00:16:31.909 "auth": { 00:16:31.909 "state": "completed", 00:16:31.909 "digest": "sha512", 00:16:31.909 "dhgroup": "null" 00:16:31.909 } 00:16:31.909 } 00:16:31.909 ]' 00:16:31.909 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.909 09:07:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:31.909 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.909 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:31.909 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.909 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.909 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.909 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.167 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:16:32.167 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:16:33.098 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.098 09:07:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:33.098 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.098 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.098 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.098 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.098 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:33.098 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:33.356 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:33.356 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.356 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:33.356 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:33.356 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:33.356 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.356 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.356 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.356 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.356 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.356 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.356 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.356 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.613 00:16:33.870 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.870 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.870 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.128 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.128 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.128 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.128 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.128 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.128 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.128 { 00:16:34.128 "cntlid": 99, 00:16:34.128 "qid": 0, 00:16:34.128 "state": "enabled", 00:16:34.128 "thread": "nvmf_tgt_poll_group_000", 00:16:34.128 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:34.128 "listen_address": { 00:16:34.128 "trtype": "TCP", 00:16:34.128 "adrfam": "IPv4", 00:16:34.128 "traddr": "10.0.0.2", 00:16:34.128 "trsvcid": "4420" 00:16:34.128 }, 00:16:34.128 "peer_address": { 00:16:34.129 "trtype": "TCP", 00:16:34.129 "adrfam": "IPv4", 00:16:34.129 "traddr": "10.0.0.1", 00:16:34.129 "trsvcid": "47756" 00:16:34.129 }, 00:16:34.129 "auth": { 00:16:34.129 "state": "completed", 00:16:34.129 "digest": "sha512", 00:16:34.129 "dhgroup": "null" 00:16:34.129 } 00:16:34.129 } 00:16:34.129 ]' 00:16:34.129 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.129 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:34.129 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.129 09:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:34.129 09:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.129 09:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.129 
09:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.129 09:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.386 09:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==: 00:16:34.386 09:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==: 00:16:35.319 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.319 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:35.319 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.319 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.319 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.319 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.319 
09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:35.319 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:35.576 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:16:35.576 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.576 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:35.576 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:35.576 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:35.576 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.576 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.576 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.577 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.577 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.577 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.577 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.577 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.834 00:16:35.834 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.834 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.834 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.399 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.399 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.399 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.399 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.399 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.399 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.399 { 00:16:36.399 "cntlid": 101, 00:16:36.399 "qid": 0, 00:16:36.399 "state": "enabled", 00:16:36.399 "thread": "nvmf_tgt_poll_group_000", 00:16:36.399 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:36.399 "listen_address": { 00:16:36.399 "trtype": "TCP", 00:16:36.399 "adrfam": "IPv4", 00:16:36.399 "traddr": "10.0.0.2", 00:16:36.399 "trsvcid": "4420" 00:16:36.399 }, 00:16:36.399 "peer_address": { 00:16:36.399 "trtype": "TCP", 00:16:36.399 "adrfam": "IPv4", 00:16:36.399 "traddr": "10.0.0.1", 00:16:36.399 "trsvcid": "47780" 00:16:36.399 }, 00:16:36.399 "auth": { 00:16:36.399 "state": "completed", 00:16:36.399 "digest": "sha512", 00:16:36.399 "dhgroup": "null" 00:16:36.399 } 00:16:36.399 } 00:16:36.399 ]' 00:16:36.399 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.399 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.399 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.399 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:36.399 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.399 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.399 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.399 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.657 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs: 00:16:36.657 09:07:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs: 00:16:37.591 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.591 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:37.591 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.591 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.591 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.591 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.591 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:37.591 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:37.849 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:37.849 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:16:37.849 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:37.849 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:37.849 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:37.849 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.849 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:37.849 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.849 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.849 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.849 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:37.849 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.849 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:38.178 00:16:38.178 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.178 
09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.178 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.480 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.480 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.480 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.480 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.480 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.480 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.480 { 00:16:38.480 "cntlid": 103, 00:16:38.480 "qid": 0, 00:16:38.480 "state": "enabled", 00:16:38.480 "thread": "nvmf_tgt_poll_group_000", 00:16:38.480 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:38.480 "listen_address": { 00:16:38.480 "trtype": "TCP", 00:16:38.480 "adrfam": "IPv4", 00:16:38.480 "traddr": "10.0.0.2", 00:16:38.480 "trsvcid": "4420" 00:16:38.480 }, 00:16:38.480 "peer_address": { 00:16:38.480 "trtype": "TCP", 00:16:38.480 "adrfam": "IPv4", 00:16:38.480 "traddr": "10.0.0.1", 00:16:38.480 "trsvcid": "47806" 00:16:38.480 }, 00:16:38.480 "auth": { 00:16:38.480 "state": "completed", 00:16:38.480 "digest": "sha512", 00:16:38.480 "dhgroup": "null" 00:16:38.480 } 00:16:38.480 } 00:16:38.480 ]' 00:16:38.480 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.480 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ 
sha512 == \s\h\a\5\1\2 ]] 00:16:38.480 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.480 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:38.480 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.480 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.480 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.480 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.738 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:16:38.738 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:16:39.671 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.672 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:39.672 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.672 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.672 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.672 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.672 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.672 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:39.672 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:39.930 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:39.930 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.930 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:39.930 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:39.930 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:39.930 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.930 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.930 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.930 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.187 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.187 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.187 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.187 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.445 00:16:40.445 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.445 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.445 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.702 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.702 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.702 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:40.702 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.702 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.702 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.702 { 00:16:40.702 "cntlid": 105, 00:16:40.702 "qid": 0, 00:16:40.702 "state": "enabled", 00:16:40.702 "thread": "nvmf_tgt_poll_group_000", 00:16:40.702 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:40.702 "listen_address": { 00:16:40.702 "trtype": "TCP", 00:16:40.702 "adrfam": "IPv4", 00:16:40.702 "traddr": "10.0.0.2", 00:16:40.702 "trsvcid": "4420" 00:16:40.702 }, 00:16:40.702 "peer_address": { 00:16:40.702 "trtype": "TCP", 00:16:40.702 "adrfam": "IPv4", 00:16:40.702 "traddr": "10.0.0.1", 00:16:40.702 "trsvcid": "48054" 00:16:40.702 }, 00:16:40.702 "auth": { 00:16:40.702 "state": "completed", 00:16:40.702 "digest": "sha512", 00:16:40.702 "dhgroup": "ffdhe2048" 00:16:40.702 } 00:16:40.702 } 00:16:40.702 ]' 00:16:40.702 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.959 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:40.959 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.959 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:40.959 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.960 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.960 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.960 09:07:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.217 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:16:41.218 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:16:42.148 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.148 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.148 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:42.148 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.148 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.148 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.148 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.148 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:42.148 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:42.407 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:42.407 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.407 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:42.407 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:42.407 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:42.407 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.407 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.407 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.407 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.407 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.407 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.407 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.407 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.664 00:16:42.664 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.664 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.664 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.230 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.230 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.230 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.230 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.230 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.230 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.230 { 00:16:43.230 "cntlid": 107, 00:16:43.230 "qid": 0, 00:16:43.230 "state": "enabled", 00:16:43.230 "thread": "nvmf_tgt_poll_group_000", 00:16:43.230 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:43.230 
"listen_address": { 00:16:43.230 "trtype": "TCP", 00:16:43.230 "adrfam": "IPv4", 00:16:43.230 "traddr": "10.0.0.2", 00:16:43.230 "trsvcid": "4420" 00:16:43.230 }, 00:16:43.230 "peer_address": { 00:16:43.230 "trtype": "TCP", 00:16:43.230 "adrfam": "IPv4", 00:16:43.230 "traddr": "10.0.0.1", 00:16:43.230 "trsvcid": "48064" 00:16:43.230 }, 00:16:43.230 "auth": { 00:16:43.230 "state": "completed", 00:16:43.230 "digest": "sha512", 00:16:43.230 "dhgroup": "ffdhe2048" 00:16:43.230 } 00:16:43.230 } 00:16:43.230 ]' 00:16:43.230 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.230 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.230 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.230 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:43.230 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.230 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.230 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.230 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.488 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==: 00:16:43.488 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==: 00:16:44.423 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.423 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:44.423 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.423 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.423 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.423 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.423 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:44.423 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:44.681 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:44.681 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.681 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:16:44.681 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:44.681 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:44.681 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.681 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.681 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.681 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.681 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.681 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.681 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.681 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.258 00:16:45.258 09:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:45.258 09:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.258 09:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.519 09:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.519 09:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.519 09:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.519 09:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.519 09:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.519 09:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.519 { 00:16:45.519 "cntlid": 109, 00:16:45.519 "qid": 0, 00:16:45.519 "state": "enabled", 00:16:45.519 "thread": "nvmf_tgt_poll_group_000", 00:16:45.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:45.519 "listen_address": { 00:16:45.519 "trtype": "TCP", 00:16:45.519 "adrfam": "IPv4", 00:16:45.519 "traddr": "10.0.0.2", 00:16:45.519 "trsvcid": "4420" 00:16:45.519 }, 00:16:45.519 "peer_address": { 00:16:45.519 "trtype": "TCP", 00:16:45.519 "adrfam": "IPv4", 00:16:45.519 "traddr": "10.0.0.1", 00:16:45.519 "trsvcid": "48082" 00:16:45.519 }, 00:16:45.519 "auth": { 00:16:45.519 "state": "completed", 00:16:45.519 "digest": "sha512", 00:16:45.519 "dhgroup": "ffdhe2048" 00:16:45.519 } 00:16:45.519 } 00:16:45.519 ]' 00:16:45.519 09:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.519 09:07:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.519 09:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.519 09:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:45.519 09:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.519 09:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.519 09:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.519 09:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.777 09:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs: 00:16:45.777 09:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs: 00:16:46.713 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.713 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:46.713 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.713 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.713 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.713 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.713 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:46.713 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:46.971 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:46.971 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.971 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:46.971 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:46.971 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:46.971 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.971 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:46.971 09:07:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.971 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.971 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.971 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:46.971 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.971 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:47.229 00:16:47.229 09:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.229 09:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.229 09:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.486 09:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.486 09:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.486 09:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.486 09:07:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.744 09:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.744 09:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.744 { 00:16:47.744 "cntlid": 111, 00:16:47.744 "qid": 0, 00:16:47.744 "state": "enabled", 00:16:47.744 "thread": "nvmf_tgt_poll_group_000", 00:16:47.744 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:47.744 "listen_address": { 00:16:47.744 "trtype": "TCP", 00:16:47.744 "adrfam": "IPv4", 00:16:47.744 "traddr": "10.0.0.2", 00:16:47.744 "trsvcid": "4420" 00:16:47.744 }, 00:16:47.744 "peer_address": { 00:16:47.744 "trtype": "TCP", 00:16:47.744 "adrfam": "IPv4", 00:16:47.744 "traddr": "10.0.0.1", 00:16:47.744 "trsvcid": "48106" 00:16:47.744 }, 00:16:47.744 "auth": { 00:16:47.744 "state": "completed", 00:16:47.744 "digest": "sha512", 00:16:47.744 "dhgroup": "ffdhe2048" 00:16:47.744 } 00:16:47.744 } 00:16:47.744 ]' 00:16:47.744 09:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.744 09:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:47.744 09:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.744 09:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:47.744 09:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.744 09:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.744 09:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.744 09:07:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.002 09:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:16:48.002 09:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:16:48.938 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.938 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:48.938 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.938 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.938 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.938 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.938 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.938 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe3072 00:16:48.938 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:49.196 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:49.196 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.196 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:49.196 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:49.196 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:49.196 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.196 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.196 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.196 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.196 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.196 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.196 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.196 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.763 00:16:49.763 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.763 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.763 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.763 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.763 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.763 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.763 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.022 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.022 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.022 { 00:16:50.022 "cntlid": 113, 00:16:50.022 "qid": 0, 00:16:50.022 "state": "enabled", 00:16:50.022 "thread": "nvmf_tgt_poll_group_000", 00:16:50.022 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:50.022 "listen_address": { 
00:16:50.022 "trtype": "TCP", 00:16:50.022 "adrfam": "IPv4", 00:16:50.022 "traddr": "10.0.0.2", 00:16:50.022 "trsvcid": "4420" 00:16:50.022 }, 00:16:50.022 "peer_address": { 00:16:50.022 "trtype": "TCP", 00:16:50.022 "adrfam": "IPv4", 00:16:50.022 "traddr": "10.0.0.1", 00:16:50.022 "trsvcid": "57258" 00:16:50.022 }, 00:16:50.022 "auth": { 00:16:50.022 "state": "completed", 00:16:50.022 "digest": "sha512", 00:16:50.022 "dhgroup": "ffdhe3072" 00:16:50.022 } 00:16:50.022 } 00:16:50.022 ]' 00:16:50.022 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.022 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:50.022 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.022 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:50.022 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.022 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.022 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.022 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.280 09:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:16:50.280 09:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:16:51.215 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.215 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:51.215 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.215 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.215 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.215 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.215 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:51.215 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:51.473 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:51.473 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:16:51.473 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:51.473 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:51.473 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:51.473 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.473 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.473 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.473 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.473 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.473 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.473 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.473 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.039 00:16:52.039 09:07:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.039 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.039 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.039 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.039 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.039 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.039 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.039 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.039 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.039 { 00:16:52.039 "cntlid": 115, 00:16:52.039 "qid": 0, 00:16:52.039 "state": "enabled", 00:16:52.039 "thread": "nvmf_tgt_poll_group_000", 00:16:52.039 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:52.039 "listen_address": { 00:16:52.039 "trtype": "TCP", 00:16:52.039 "adrfam": "IPv4", 00:16:52.039 "traddr": "10.0.0.2", 00:16:52.039 "trsvcid": "4420" 00:16:52.040 }, 00:16:52.040 "peer_address": { 00:16:52.040 "trtype": "TCP", 00:16:52.040 "adrfam": "IPv4", 00:16:52.040 "traddr": "10.0.0.1", 00:16:52.040 "trsvcid": "57280" 00:16:52.040 }, 00:16:52.040 "auth": { 00:16:52.040 "state": "completed", 00:16:52.040 "digest": "sha512", 00:16:52.040 "dhgroup": "ffdhe3072" 00:16:52.040 } 00:16:52.040 } 00:16:52.040 ]' 00:16:52.040 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # 
jq -r '.[0].auth.digest' 00:16:52.298 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:52.298 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.298 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:52.298 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.298 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.298 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.298 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.556 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==: 00:16:52.556 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==: 00:16:53.489 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.489 09:07:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:53.489 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.489 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.489 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.489 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.489 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:53.489 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:53.747 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:16:53.747 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.747 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:53.747 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:53.747 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:53.747 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.747 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.747 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.747 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.748 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.748 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.748 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.748 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.006 00:16:54.006 09:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.006 09:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.006 09:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.573 09:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.573 09:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.573 09:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.573 09:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.573 09:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.573 09:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.573 { 00:16:54.573 "cntlid": 117, 00:16:54.573 "qid": 0, 00:16:54.573 "state": "enabled", 00:16:54.573 "thread": "nvmf_tgt_poll_group_000", 00:16:54.573 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:54.573 "listen_address": { 00:16:54.573 "trtype": "TCP", 00:16:54.573 "adrfam": "IPv4", 00:16:54.573 "traddr": "10.0.0.2", 00:16:54.573 "trsvcid": "4420" 00:16:54.573 }, 00:16:54.573 "peer_address": { 00:16:54.573 "trtype": "TCP", 00:16:54.573 "adrfam": "IPv4", 00:16:54.573 "traddr": "10.0.0.1", 00:16:54.573 "trsvcid": "57310" 00:16:54.573 }, 00:16:54.573 "auth": { 00:16:54.573 "state": "completed", 00:16:54.573 "digest": "sha512", 00:16:54.573 "dhgroup": "ffdhe3072" 00:16:54.573 } 00:16:54.573 } 00:16:54.573 ]' 00:16:54.573 09:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.573 09:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:54.573 09:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.573 09:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:54.573 09:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.573 09:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:54.573 09:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.573 09:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.831 09:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs: 00:16:54.831 09:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs: 00:16:55.765 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.765 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:55.765 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.765 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.765 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.765 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:55.765 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:55.765 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:56.024 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3
00:16:56.024 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:56.024 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:56.024 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:56.024 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:56.024 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:56.024 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:16:56.024 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:56.024 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:56.024 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:56.024 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:56.024 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:56.024 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:56.283
00:16:56.283 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:56.283 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:56.283 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:56.849 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:56.849 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:56.849 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:56.849 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:56.849 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:56.849 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:56.849 {
00:16:56.849 "cntlid": 119,
00:16:56.849 "qid": 0,
00:16:56.849 "state": "enabled",
00:16:56.849 "thread": "nvmf_tgt_poll_group_000",
00:16:56.849 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:16:56.849 "listen_address": {
00:16:56.849 "trtype": "TCP",
00:16:56.849 "adrfam": "IPv4",
00:16:56.849 "traddr": "10.0.0.2",
00:16:56.849 "trsvcid": "4420"
00:16:56.849 },
00:16:56.849 "peer_address": {
00:16:56.849 "trtype": "TCP",
00:16:56.849 "adrfam": "IPv4",
00:16:56.849 "traddr": "10.0.0.1",
00:16:56.849 "trsvcid": "57340"
00:16:56.849 },
00:16:56.849 "auth": {
00:16:56.849 "state": "completed",
00:16:56.849 "digest": "sha512",
00:16:56.849 "dhgroup": "ffdhe3072"
00:16:56.849 }
00:16:56.849 }
00:16:56.849 ]'
00:16:56.849 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:56.849 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:56.849 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:56.849 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:56.849 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:56.849 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:56.849 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:56.849 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:57.107 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=:
00:16:57.107 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=:
00:16:58.042 09:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:58.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:58.042 09:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:16:58.042 09:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:58.042 09:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:58.042 09:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:58.042 09:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:58.042 09:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:58.042 09:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:58.042 09:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:58.300 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0
00:16:58.300 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:58.300 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:58.300 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:16:58.300 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:58.300 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:58.300 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:58.300 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:58.300 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:58.300 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:58.300 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:58.300 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:58.300 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:58.559
00:16:58.559 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:58.559 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:58.559 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:58.817 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:58.817 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:58.817 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:58.817 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:59.075 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:59.075 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:59.075 {
00:16:59.075 "cntlid": 121,
00:16:59.075 "qid": 0,
00:16:59.075 "state": "enabled",
00:16:59.075 "thread": "nvmf_tgt_poll_group_000",
00:16:59.075 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:16:59.075 "listen_address": {
00:16:59.075 "trtype": "TCP",
00:16:59.076 "adrfam": "IPv4",
00:16:59.076 "traddr": "10.0.0.2",
00:16:59.076 "trsvcid": "4420"
00:16:59.076 },
00:16:59.076 "peer_address": {
00:16:59.076 "trtype": "TCP",
00:16:59.076 "adrfam": "IPv4",
00:16:59.076 "traddr": "10.0.0.1",
00:16:59.076 "trsvcid": "57362"
00:16:59.076 },
00:16:59.076 "auth": {
00:16:59.076 "state": "completed",
00:16:59.076 "digest": "sha512",
00:16:59.076 "dhgroup": "ffdhe4096"
00:16:59.076 }
00:16:59.076 }
00:16:59.076 ]'
00:16:59.076 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:59.076 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:59.076 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:59.076 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:59.076 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:59.076 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:59.076 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:59.076 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:59.334 09:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=:
00:16:59.334 09:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=:
00:17:00.268 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:00.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:00.268 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:17:00.268 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:00.268 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:00.268 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:00.268 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:00.268 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:17:00.268 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:17:00.526 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1
00:17:00.526 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:00.526 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:00.526 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:17:00.526 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:00.526 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:00.526 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:00.526 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:00.526 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:00.526 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:00.526 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:00.526 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:00.526 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:01.092
00:17:01.092 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:01.092 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:01.092 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:01.350 09:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:01.350 09:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:01.350 09:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:01.350 09:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:01.350 09:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:01.350 09:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:01.350 {
00:17:01.350 "cntlid": 123,
00:17:01.350 "qid": 0,
00:17:01.350 "state": "enabled",
00:17:01.350 "thread": "nvmf_tgt_poll_group_000",
00:17:01.350 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:17:01.350 "listen_address": {
00:17:01.350 "trtype": "TCP",
00:17:01.350 "adrfam": "IPv4",
00:17:01.350 "traddr": "10.0.0.2",
00:17:01.350 "trsvcid": "4420"
00:17:01.350 },
00:17:01.350 "peer_address": {
00:17:01.350 "trtype": "TCP",
00:17:01.350 "adrfam": "IPv4",
00:17:01.350 "traddr": "10.0.0.1",
00:17:01.350 "trsvcid": "47744"
00:17:01.350 },
00:17:01.350 "auth": {
00:17:01.350 "state": "completed",
00:17:01.350 "digest": "sha512",
00:17:01.350 "dhgroup": "ffdhe4096"
00:17:01.350 }
00:17:01.350 }
00:17:01.350 ]'
00:17:01.350 09:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:01.350 09:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:01.350 09:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:01.350 09:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:01.350 09:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:01.350 09:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:01.350 09:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:01.350 09:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:01.608 09:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==:
00:17:01.608 09:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==:
00:17:02.541 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:02.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:02.541 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:17:02.541 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:02.541 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:02.541 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:02.541 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:02.541 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:17:02.541 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:17:02.798 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2
00:17:02.798 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:03.057 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:03.057 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:17:03.057 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:03.057 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:03.057 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:03.057 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:03.057 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:03.057 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:03.057 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:03.057 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:03.057 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:03.315
00:17:03.315 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:03.315 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:03.315 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:03.572 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:03.572 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:03.572 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:03.572 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:03.572 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:03.572 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:03.572 {
00:17:03.572 "cntlid": 125,
00:17:03.572 "qid": 0,
00:17:03.572 "state": "enabled",
00:17:03.572 "thread": "nvmf_tgt_poll_group_000",
00:17:03.572 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:17:03.572 "listen_address": {
00:17:03.572 "trtype": "TCP",
00:17:03.572 "adrfam": "IPv4",
00:17:03.572 "traddr": "10.0.0.2",
00:17:03.572 "trsvcid": "4420"
00:17:03.572 },
00:17:03.572 "peer_address": {
00:17:03.572 "trtype": "TCP",
00:17:03.572 "adrfam": "IPv4",
00:17:03.572 "traddr": "10.0.0.1",
00:17:03.572 "trsvcid": "47770"
00:17:03.572 },
00:17:03.572 "auth": {
00:17:03.572 "state": "completed",
00:17:03.572 "digest": "sha512",
00:17:03.572 "dhgroup": "ffdhe4096"
00:17:03.572 }
00:17:03.572 }
00:17:03.572 ]'
00:17:03.572 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:03.830 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:03.830 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:03.830 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:03.830 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:03.830 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:03.830 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:03.830 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:04.089 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs:
00:17:04.089 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs:
00:17:05.022 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:05.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:05.022 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:17:05.022 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:05.022 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:05.022 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:05.022 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:05.022 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:17:05.022 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:17:05.280 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3
00:17:05.280 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:05.280 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:05.280 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:17:05.280 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:05.280 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:05.280 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:17:05.280 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:05.280 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:05.280 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:05.280 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:05.280 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:05.280 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:05.538
00:17:05.538 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:05.538 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:05.538 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:05.795 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:05.795 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:05.795 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:05.795 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:05.795 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:05.795 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:05.795 {
00:17:05.795 "cntlid": 127,
00:17:05.795 "qid": 0,
00:17:05.795 "state": "enabled",
00:17:05.795 "thread": "nvmf_tgt_poll_group_000",
00:17:05.795 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:17:05.795 "listen_address": {
00:17:05.795 "trtype": "TCP",
00:17:05.795 "adrfam": "IPv4",
00:17:05.795 "traddr": "10.0.0.2",
00:17:05.795 "trsvcid": "4420"
00:17:05.795 },
00:17:05.795 "peer_address": {
00:17:05.795 "trtype": "TCP",
00:17:05.795 "adrfam": "IPv4",
00:17:05.795 "traddr": "10.0.0.1",
00:17:05.795 "trsvcid": "47816"
00:17:05.795 },
00:17:05.795 "auth": {
00:17:05.795 "state": "completed",
00:17:05.795 "digest": "sha512",
00:17:05.795 "dhgroup": "ffdhe4096"
00:17:05.795 }
00:17:05.795 }
00:17:05.795 ]'
00:17:05.795 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:06.053 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:06.053 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:06.053 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:06.053 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:06.053 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:06.053 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:06.053 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:06.311 09:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=:
00:17:06.311 09:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=:
00:17:07.247 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:07.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:07.247 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:17:07.247 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:07.247 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:07.247 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:07.247 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:07.247 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:07.247 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:17:07.247 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:17:07.506 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0
00:17:07.506 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:07.506 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:07.506 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:17:07.506 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:07.506 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:07.506 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:07.506 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:07.506 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:07.506 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:07.506 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:07.506 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:07.506 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:08.073
00:17:08.073 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:08.073 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:08.073 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:08.332 09:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:08.332 09:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.332 09:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.332 09:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.332 09:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.332 09:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.332 { 00:17:08.332 "cntlid": 129, 00:17:08.332 "qid": 0, 00:17:08.332 "state": "enabled", 00:17:08.332 "thread": "nvmf_tgt_poll_group_000", 00:17:08.332 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:08.332 "listen_address": { 00:17:08.332 "trtype": "TCP", 00:17:08.332 "adrfam": "IPv4", 00:17:08.332 "traddr": "10.0.0.2", 00:17:08.332 "trsvcid": "4420" 00:17:08.332 }, 00:17:08.332 "peer_address": { 00:17:08.332 "trtype": "TCP", 00:17:08.332 "adrfam": "IPv4", 00:17:08.332 "traddr": "10.0.0.1", 00:17:08.332 "trsvcid": "47858" 00:17:08.332 }, 00:17:08.332 "auth": { 00:17:08.332 "state": "completed", 00:17:08.332 "digest": "sha512", 00:17:08.332 "dhgroup": "ffdhe6144" 00:17:08.332 } 00:17:08.332 } 00:17:08.332 ]' 00:17:08.332 09:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.332 09:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:08.332 09:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.332 09:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:08.332 09:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.332 09:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:08.332 09:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.332 09:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.590 09:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:17:08.590 09:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:17:09.524 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.524 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:09.524 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.524 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.524 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.524 09:07:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.524 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:09.524 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:09.782 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:09.782 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.782 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:09.782 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:09.783 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:09.783 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.783 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.783 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.783 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.783 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.783 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:09.783 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.783 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.349 00:17:10.349 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.349 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.349 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.614 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.614 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.614 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.614 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.614 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.614 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.614 { 00:17:10.614 "cntlid": 131, 00:17:10.614 "qid": 0, 00:17:10.614 "state": 
"enabled", 00:17:10.614 "thread": "nvmf_tgt_poll_group_000", 00:17:10.614 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:10.614 "listen_address": { 00:17:10.614 "trtype": "TCP", 00:17:10.614 "adrfam": "IPv4", 00:17:10.614 "traddr": "10.0.0.2", 00:17:10.614 "trsvcid": "4420" 00:17:10.614 }, 00:17:10.614 "peer_address": { 00:17:10.614 "trtype": "TCP", 00:17:10.614 "adrfam": "IPv4", 00:17:10.614 "traddr": "10.0.0.1", 00:17:10.614 "trsvcid": "57108" 00:17:10.614 }, 00:17:10.614 "auth": { 00:17:10.614 "state": "completed", 00:17:10.614 "digest": "sha512", 00:17:10.614 "dhgroup": "ffdhe6144" 00:17:10.614 } 00:17:10.614 } 00:17:10.614 ]' 00:17:10.614 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.614 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:10.614 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.872 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:10.872 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.872 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.872 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.872 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.130 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret 
DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==: 00:17:11.130 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==: 00:17:12.066 09:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.067 09:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:12.067 09:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.067 09:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.067 09:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.067 09:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.067 09:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:12.067 09:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:12.325 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 
ffdhe6144 2 00:17:12.325 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.325 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:12.325 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:12.325 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:12.325 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.325 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.325 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.325 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.325 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.325 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.325 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.325 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.891 00:17:12.891 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.891 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.891 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.149 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.149 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.149 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.149 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.149 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.149 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.149 { 00:17:13.149 "cntlid": 133, 00:17:13.149 "qid": 0, 00:17:13.149 "state": "enabled", 00:17:13.149 "thread": "nvmf_tgt_poll_group_000", 00:17:13.149 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:13.149 "listen_address": { 00:17:13.149 "trtype": "TCP", 00:17:13.149 "adrfam": "IPv4", 00:17:13.149 "traddr": "10.0.0.2", 00:17:13.149 "trsvcid": "4420" 00:17:13.149 }, 00:17:13.149 "peer_address": { 00:17:13.149 "trtype": "TCP", 00:17:13.149 "adrfam": "IPv4", 00:17:13.149 "traddr": "10.0.0.1", 00:17:13.149 "trsvcid": "57118" 00:17:13.149 }, 00:17:13.149 "auth": { 00:17:13.149 "state": "completed", 00:17:13.149 "digest": "sha512", 00:17:13.149 "dhgroup": "ffdhe6144" 00:17:13.149 } 
00:17:13.149 } 00:17:13.149 ]' 00:17:13.149 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.149 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.149 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.149 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:13.149 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.149 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.149 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.149 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.407 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs: 00:17:13.407 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs: 00:17:14.340 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:17:14.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.340 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:14.340 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.340 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.340 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.340 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.340 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:14.340 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:14.599 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:14.599 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.599 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:14.599 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:14.599 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:14.599 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.599 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:14.599 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.599 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.599 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.599 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:14.599 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:14.599 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:15.165 00:17:15.165 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.165 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.165 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.422 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.422 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:15.422 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.422 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.422 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.422 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.422 { 00:17:15.422 "cntlid": 135, 00:17:15.422 "qid": 0, 00:17:15.422 "state": "enabled", 00:17:15.422 "thread": "nvmf_tgt_poll_group_000", 00:17:15.422 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:15.422 "listen_address": { 00:17:15.422 "trtype": "TCP", 00:17:15.422 "adrfam": "IPv4", 00:17:15.422 "traddr": "10.0.0.2", 00:17:15.422 "trsvcid": "4420" 00:17:15.422 }, 00:17:15.422 "peer_address": { 00:17:15.422 "trtype": "TCP", 00:17:15.422 "adrfam": "IPv4", 00:17:15.422 "traddr": "10.0.0.1", 00:17:15.422 "trsvcid": "57146" 00:17:15.422 }, 00:17:15.422 "auth": { 00:17:15.422 "state": "completed", 00:17:15.422 "digest": "sha512", 00:17:15.422 "dhgroup": "ffdhe6144" 00:17:15.422 } 00:17:15.422 } 00:17:15.422 ]' 00:17:15.422 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.680 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:15.680 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.680 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:15.680 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.680 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.680 09:07:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.680 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.938 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:17:15.938 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:17:16.873 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.873 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:16.873 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.873 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.873 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.873 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:16.873 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.873 09:07:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:16.873 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:17.131 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:17.131 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.131 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:17.131 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:17.131 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:17.131 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.131 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.131 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.131 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.131 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.131 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.131 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.131 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.071 00:17:18.071 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.071 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.071 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.368 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.368 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.368 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.368 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.368 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.368 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.368 { 00:17:18.368 "cntlid": 137, 00:17:18.368 "qid": 0, 00:17:18.368 "state": "enabled", 00:17:18.368 "thread": "nvmf_tgt_poll_group_000", 00:17:18.368 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:18.368 "listen_address": { 00:17:18.368 "trtype": "TCP", 00:17:18.368 "adrfam": "IPv4", 00:17:18.368 "traddr": "10.0.0.2", 00:17:18.368 "trsvcid": "4420" 00:17:18.368 }, 00:17:18.368 "peer_address": { 00:17:18.368 "trtype": "TCP", 00:17:18.368 "adrfam": "IPv4", 00:17:18.368 "traddr": "10.0.0.1", 00:17:18.368 "trsvcid": "57158" 00:17:18.368 }, 00:17:18.368 "auth": { 00:17:18.368 "state": "completed", 00:17:18.368 "digest": "sha512", 00:17:18.368 "dhgroup": "ffdhe8192" 00:17:18.368 } 00:17:18.368 } 00:17:18.368 ]' 00:17:18.368 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.368 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:18.368 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.368 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:18.368 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.368 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.368 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.368 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.661 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret 
DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:17:18.661 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:17:19.594 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.594 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:19.594 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.594 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.594 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.594 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.594 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:19.594 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:19.852 09:07:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:19.852 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.852 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:19.852 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:19.852 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:19.852 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.852 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.852 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.852 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.852 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.852 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.852 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.852 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.785 00:17:20.785 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.785 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.785 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.043 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.043 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.043 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.043 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.043 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.043 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.043 { 00:17:21.043 "cntlid": 139, 00:17:21.043 "qid": 0, 00:17:21.043 "state": "enabled", 00:17:21.043 "thread": "nvmf_tgt_poll_group_000", 00:17:21.043 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:21.043 "listen_address": { 00:17:21.043 "trtype": "TCP", 00:17:21.043 "adrfam": "IPv4", 00:17:21.043 "traddr": "10.0.0.2", 00:17:21.043 "trsvcid": "4420" 00:17:21.043 }, 00:17:21.043 "peer_address": { 00:17:21.043 "trtype": "TCP", 00:17:21.043 "adrfam": "IPv4", 00:17:21.043 "traddr": "10.0.0.1", 00:17:21.043 "trsvcid": "48884" 00:17:21.043 }, 00:17:21.043 "auth": { 00:17:21.043 "state": 
"completed", 00:17:21.043 "digest": "sha512", 00:17:21.043 "dhgroup": "ffdhe8192" 00:17:21.043 } 00:17:21.043 } 00:17:21.043 ]' 00:17:21.043 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.043 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:21.043 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.043 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:21.043 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.043 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.043 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.043 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.302 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==: 00:17:21.302 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: --dhchap-ctrl-secret DHHC-1:02:ZTI3ZjdiZjIxMjkzN2Y3YjYzNTZiNjE5ZGQ2OGRlMjQ3OGJmOGZkZjMzNzAwYTUySC4Amw==: 00:17:22.236 09:07:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.236 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:22.236 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.236 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.236 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.236 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.236 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:22.236 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:22.494 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:22.494 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.494 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:22.494 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:22.494 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:22.494 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.494 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.494 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.494 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.494 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.494 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.494 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.494 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.428 00:17:23.428 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.428 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.428 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.686 
09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.686 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.686 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.686 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.686 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.686 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.686 { 00:17:23.686 "cntlid": 141, 00:17:23.686 "qid": 0, 00:17:23.686 "state": "enabled", 00:17:23.686 "thread": "nvmf_tgt_poll_group_000", 00:17:23.686 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:23.686 "listen_address": { 00:17:23.686 "trtype": "TCP", 00:17:23.686 "adrfam": "IPv4", 00:17:23.686 "traddr": "10.0.0.2", 00:17:23.686 "trsvcid": "4420" 00:17:23.686 }, 00:17:23.686 "peer_address": { 00:17:23.686 "trtype": "TCP", 00:17:23.686 "adrfam": "IPv4", 00:17:23.686 "traddr": "10.0.0.1", 00:17:23.686 "trsvcid": "48912" 00:17:23.686 }, 00:17:23.686 "auth": { 00:17:23.686 "state": "completed", 00:17:23.686 "digest": "sha512", 00:17:23.686 "dhgroup": "ffdhe8192" 00:17:23.686 } 00:17:23.686 } 00:17:23.686 ]' 00:17:23.686 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.686 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:23.686 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.945 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:23.945 09:08:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.945 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.945 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.945 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.203 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs: 00:17:24.203 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:01:YjczZjAwZGJkNWYzZjlhZjM2ZjcxYzgwYzQxMGQwMDf9wpvs: 00:17:25.137 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.137 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:25.137 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.137 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.137 
09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.137 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.137 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:25.137 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:25.395 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:25.395 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.395 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:25.395 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:25.395 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:25.395 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.395 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:25.395 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.395 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.395 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.395 09:08:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:25.395 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:25.395 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:26.328 00:17:26.328 09:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.328 09:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.328 09:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.328 09:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.328 09:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.328 09:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.328 09:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.586 09:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.586 09:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.586 { 00:17:26.586 "cntlid": 143, 
00:17:26.586 "qid": 0, 00:17:26.586 "state": "enabled", 00:17:26.586 "thread": "nvmf_tgt_poll_group_000", 00:17:26.586 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:26.586 "listen_address": { 00:17:26.586 "trtype": "TCP", 00:17:26.586 "adrfam": "IPv4", 00:17:26.586 "traddr": "10.0.0.2", 00:17:26.586 "trsvcid": "4420" 00:17:26.586 }, 00:17:26.586 "peer_address": { 00:17:26.586 "trtype": "TCP", 00:17:26.586 "adrfam": "IPv4", 00:17:26.586 "traddr": "10.0.0.1", 00:17:26.586 "trsvcid": "48938" 00:17:26.586 }, 00:17:26.586 "auth": { 00:17:26.586 "state": "completed", 00:17:26.586 "digest": "sha512", 00:17:26.586 "dhgroup": "ffdhe8192" 00:17:26.586 } 00:17:26.586 } 00:17:26.586 ]' 00:17:26.586 09:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.586 09:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:26.586 09:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.586 09:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:26.586 09:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.586 09:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.586 09:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.587 09:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.844 09:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:17:26.844 09:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:17:27.779 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.779 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:27.779 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.779 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.779 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.779 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:27.779 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:27.779 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:27.779 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:27.779 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:17:27.779 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:28.037 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:28.037 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.037 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:28.037 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:28.037 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:28.037 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.037 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.037 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.037 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.037 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.037 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.037 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.037 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.971 00:17:28.971 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.971 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.971 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.229 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.229 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.229 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.229 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.229 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.229 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.229 { 00:17:29.229 "cntlid": 145, 00:17:29.229 "qid": 0, 00:17:29.229 "state": "enabled", 00:17:29.229 "thread": "nvmf_tgt_poll_group_000", 00:17:29.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:29.229 "listen_address": { 
00:17:29.229 "trtype": "TCP", 00:17:29.229 "adrfam": "IPv4", 00:17:29.229 "traddr": "10.0.0.2", 00:17:29.229 "trsvcid": "4420" 00:17:29.229 }, 00:17:29.229 "peer_address": { 00:17:29.229 "trtype": "TCP", 00:17:29.229 "adrfam": "IPv4", 00:17:29.229 "traddr": "10.0.0.1", 00:17:29.229 "trsvcid": "48972" 00:17:29.229 }, 00:17:29.229 "auth": { 00:17:29.229 "state": "completed", 00:17:29.229 "digest": "sha512", 00:17:29.229 "dhgroup": "ffdhe8192" 00:17:29.229 } 00:17:29.229 } 00:17:29.229 ]' 00:17:29.229 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.229 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:29.229 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.229 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:29.229 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.229 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.229 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.229 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.797 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:17:29.798 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NjQ5YjNmOGIxZTBlY2NmYTc2NzllZDk0ODdmZDhhZGZlMjk3N2VkMmU1ODMxNTZmvAGXQg==: --dhchap-ctrl-secret DHHC-1:03:ZDI3Yzk3OTIyNzUyODAzOTMxNWM3NGIzMTcxOGUzNmRkYzY1MzZhZjgyY2E5MzRjODY2YzFhNTk5NWUyZThmNmJ10Pg=: 00:17:30.731 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.731 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:30.731 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.731 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.731 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.731 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:17:30.731 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.731 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.731 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.731 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:30.731 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@650 -- # local es=0 00:17:30.731 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:30.731 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:30.731 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:30.731 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:30.731 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:30.731 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:30.731 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:30.731 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:31.296 request: 00:17:31.296 { 00:17:31.296 "name": "nvme0", 00:17:31.296 "trtype": "tcp", 00:17:31.296 "traddr": "10.0.0.2", 00:17:31.296 "adrfam": "ipv4", 00:17:31.296 "trsvcid": "4420", 00:17:31.296 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:31.296 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:31.296 "prchk_reftag": false, 00:17:31.296 "prchk_guard": false, 00:17:31.296 "hdgst": false, 00:17:31.296 "ddgst": 
false, 00:17:31.296 "dhchap_key": "key2", 00:17:31.296 "allow_unrecognized_csi": false, 00:17:31.296 "method": "bdev_nvme_attach_controller", 00:17:31.296 "req_id": 1 00:17:31.296 } 00:17:31.296 Got JSON-RPC error response 00:17:31.296 response: 00:17:31.296 { 00:17:31.296 "code": -5, 00:17:31.296 "message": "Input/output error" 00:17:31.296 } 00:17:31.296 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:31.296 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:31.296 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:31.296 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:31.296 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:31.296 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.296 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.296 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.296 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.296 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.296 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.296 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
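The trace above keeps expanding the same two helpers from `target/auth.sh`: `hostrpc`, which prefixes every call with `rpc.py -s /var/tmp/host.sock`, and `bdev_connect`, which assembles the `bdev_nvme_attach_controller` arguments. A minimal sketch of that pair, reconstructed from the expansions in this log (the relative `rpc.py` path is an assumption, and `echo` stands in for the real socket call so the sketch is runnable without a target):

```shell
# Sketch of the hostrpc/bdev_connect wrappers seen at auth.sh@31 and
# auth.sh@60 above. "echo" replaces the real rpc.py invocation; RPC is an
# assumed relative path (the log uses an absolute workspace path).
RPC=./scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
hostnqn="nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a"
subnqn="nqn.2024-03.io.spdk:cnode0"

hostrpc() {
    # Host-side RPC: same rpc.py, but pointed at the host app's socket.
    echo "$RPC" -s "$HOST_SOCK" "$@"
}

bdev_connect() {
    local bdev=$2
    shift 2    # consume "-b nvme0"; remaining args are the dhchap key flags
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT" \
        -q "$hostnqn" -n "$subnqn" -b "$bdev" "$@"
}

bdev_connect -b nvme0 --dhchap-key key2
```

The printed command line matches the `bdev_nvme_attach_controller` invocations logged above; only the trailing `--dhchap-key`/`--dhchap-ctrlr-key` flags vary between the test cases.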
00:17:31.297 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:31.297 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:31.297 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:31.297 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:31.297 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:31.297 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:31.297 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:31.297 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:31.297 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:31.297 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:32.229 request: 00:17:32.229 { 00:17:32.229 "name": "nvme0", 00:17:32.229 "trtype": "tcp", 00:17:32.229 "traddr": "10.0.0.2", 
00:17:32.229 "adrfam": "ipv4", 00:17:32.229 "trsvcid": "4420", 00:17:32.229 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:32.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:32.229 "prchk_reftag": false, 00:17:32.229 "prchk_guard": false, 00:17:32.229 "hdgst": false, 00:17:32.229 "ddgst": false, 00:17:32.229 "dhchap_key": "key1", 00:17:32.229 "dhchap_ctrlr_key": "ckey2", 00:17:32.229 "allow_unrecognized_csi": false, 00:17:32.229 "method": "bdev_nvme_attach_controller", 00:17:32.229 "req_id": 1 00:17:32.229 } 00:17:32.229 Got JSON-RPC error response 00:17:32.229 response: 00:17:32.229 { 00:17:32.229 "code": -5, 00:17:32.229 "message": "Input/output error" 00:17:32.229 } 00:17:32.229 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:32.229 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:32.229 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:32.229 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:32.229 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:32.229 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.229 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.229 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.229 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 
00:17:32.229 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.229 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.229 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.229 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.229 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:32.229 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.229 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:32.229 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:32.229 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:32.229 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:32.229 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.229 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.229 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.160 request: 00:17:33.160 { 00:17:33.160 "name": "nvme0", 00:17:33.160 "trtype": "tcp", 00:17:33.160 "traddr": "10.0.0.2", 00:17:33.160 "adrfam": "ipv4", 00:17:33.160 "trsvcid": "4420", 00:17:33.160 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:33.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:33.160 "prchk_reftag": false, 00:17:33.160 "prchk_guard": false, 00:17:33.160 "hdgst": false, 00:17:33.160 "ddgst": false, 00:17:33.160 "dhchap_key": "key1", 00:17:33.160 "dhchap_ctrlr_key": "ckey1", 00:17:33.160 "allow_unrecognized_csi": false, 00:17:33.160 "method": "bdev_nvme_attach_controller", 00:17:33.160 "req_id": 1 00:17:33.160 } 00:17:33.160 Got JSON-RPC error response 00:17:33.160 response: 00:17:33.160 { 00:17:33.160 "code": -5, 00:17:33.160 "message": "Input/output error" 00:17:33.160 } 00:17:33.160 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:33.160 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:33.160 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:33.160 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:33.160 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:33.160 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.160 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.160 
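Each of the failing attach attempts above is wrapped in `NOT` (from `common/autotest_common.sh`): the test passes precisely because the connect fails with the wrong key, which is why every `Input/output error` response is followed by `es=1` bookkeeping rather than an abort. A simplified sketch of that idiom (the real helper also inspects signal exits via the `es > 128` check visible in the trace; this version only checks for a plain nonzero status):

```shell
# Simplified "NOT" wrapper: run a command that is EXPECTED to fail, and
# succeed only when it does fail.
NOT() {
    local es=0
    "$@" || es=$?
    if [ "$es" -eq 0 ]; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # expected failure observed
}

NOT false && echo "negative test passed"
```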
09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.160 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3314474 00:17:33.160 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3314474 ']' 00:17:33.160 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3314474 00:17:33.160 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:17:33.160 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:33.160 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3314474 00:17:33.160 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:33.161 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:33.161 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3314474' 00:17:33.161 killing process with pid 3314474 00:17:33.161 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3314474 00:17:33.161 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3314474 00:17:33.418 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:33.418 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:33.418 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:33.418 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:33.418 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3337415 00:17:33.418 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:33.418 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3337415 00:17:33.418 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3337415 ']' 00:17:33.418 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.418 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:33.418 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:33.418 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:33.418 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.674 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:33.674 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:17:33.674 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:33.674 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:33.674 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.674 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:33.674 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:33.675 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3337415 00:17:33.675 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 3337415 ']' 00:17:33.675 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.675 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:33.675 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:33.675 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:33.675 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.932 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:33.932 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:17:33.932 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:33.932 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.932 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.932 null0 00:17:33.932 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.932 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:33.932 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.elz 00:17:33.932 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.932 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.932 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.932 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.pDa ]] 00:17:33.932 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pDa 00:17:33.932 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.932 09:08:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.190 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.190 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:34.190 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.0Ke 00:17:34.190 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.190 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.190 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.190 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.32b ]] 00:17:34.190 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.32b 00:17:34.190 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.190 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.190 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.190 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:34.190 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.q2n 00:17:34.190 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.190 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.190 09:08:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.190 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.5zo ]] 00:17:34.190 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.5zo 00:17:34.190 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.190 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.190 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.190 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:34.190 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Vy4 00:17:34.190 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.190 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.190 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.190 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:34.190 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:34.190 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.190 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:34.190 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:34.190 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:17:34.190 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.190 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:34.190 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.190 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.190 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.190 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:34.190 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:34.190 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.564 nvme0n1 00:17:35.564 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.564 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.564 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:17:35.822 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.822 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.822 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.822 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.822 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.822 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.822 { 00:17:35.822 "cntlid": 1, 00:17:35.822 "qid": 0, 00:17:35.822 "state": "enabled", 00:17:35.822 "thread": "nvmf_tgt_poll_group_000", 00:17:35.822 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:35.822 "listen_address": { 00:17:35.822 "trtype": "TCP", 00:17:35.822 "adrfam": "IPv4", 00:17:35.822 "traddr": "10.0.0.2", 00:17:35.822 "trsvcid": "4420" 00:17:35.822 }, 00:17:35.822 "peer_address": { 00:17:35.822 "trtype": "TCP", 00:17:35.822 "adrfam": "IPv4", 00:17:35.822 "traddr": "10.0.0.1", 00:17:35.822 "trsvcid": "56836" 00:17:35.822 }, 00:17:35.822 "auth": { 00:17:35.822 "state": "completed", 00:17:35.822 "digest": "sha512", 00:17:35.822 "dhgroup": "ffdhe8192" 00:17:35.822 } 00:17:35.822 } 00:17:35.822 ]' 00:17:35.822 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.822 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:35.822 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.822 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 
00:17:35.822 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.822 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.822 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.822 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.080 09:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:17:36.080 09:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:17:37.015 09:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.015 09:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:37.015 09:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.015 09:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.015 09:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
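The `--dhchap-secret` strings passed to `nvme connect` above follow the DH-HMAC-CHAP secret representation from the NVMe-oF spec (as generated by tools like `nvme gen-dhchap-key`): `DHHC-1:<hash>:<base64 payload>:`, where the hash field is `00` (no transformation), `01` (SHA-256), `02` (SHA-384), or `03` (SHA-512). A hedged sketch that splits one of the secrets from this log (the 64-byte-key-plus-4-byte-CRC payload layout for hash `03` is an assumption based on that format):

```shell
# Parse a DHHC-1 secret from the log: "DHHC-1:<hash>:<base64>:".
secret="DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=:"
IFS=: read -r prefix hash b64 _ <<< "$secret"
[ "$prefix" = DHHC-1 ] || echo "not a DHHC-1 secret" >&2
# Decoded payload: assumed to be the key bytes followed by a 4-byte CRC
# (64 + 4 bytes for hash id 03 / SHA-512).
keylen=$(printf %s "$b64" | base64 -d | wc -c)
echo "hash id $hash, decoded payload $keylen bytes"
```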
00:17:37.015 09:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:37.015 09:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.015 09:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.015 09:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.015 09:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:37.015 09:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:37.580 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:37.580 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:37.580 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:37.580 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:37.580 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:37.580 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:37.580 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:37.580 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:17:37.580 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:37.581 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:37.839 request: 00:17:37.839 { 00:17:37.839 "name": "nvme0", 00:17:37.839 "trtype": "tcp", 00:17:37.839 "traddr": "10.0.0.2", 00:17:37.839 "adrfam": "ipv4", 00:17:37.839 "trsvcid": "4420", 00:17:37.839 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:37.839 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:37.839 "prchk_reftag": false, 00:17:37.839 "prchk_guard": false, 00:17:37.839 "hdgst": false, 00:17:37.839 "ddgst": false, 00:17:37.839 "dhchap_key": "key3", 00:17:37.839 "allow_unrecognized_csi": false, 00:17:37.839 "method": "bdev_nvme_attach_controller", 00:17:37.839 "req_id": 1 00:17:37.839 } 00:17:37.839 Got JSON-RPC error response 00:17:37.839 response: 00:17:37.839 { 00:17:37.839 "code": -5, 00:17:37.839 "message": "Input/output error" 00:17:37.839 } 00:17:37.839 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:37.839 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:37.839 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:37.839 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:37.839 09:08:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:37.839 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:37.839 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:37.839 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:38.096 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:38.096 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:38.096 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:38.096 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:38.097 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:38.097 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:38.097 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:38.097 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:38.097 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:17:38.097 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:38.353 request: 00:17:38.353 { 00:17:38.353 "name": "nvme0", 00:17:38.353 "trtype": "tcp", 00:17:38.353 "traddr": "10.0.0.2", 00:17:38.353 "adrfam": "ipv4", 00:17:38.353 "trsvcid": "4420", 00:17:38.353 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:38.353 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:38.353 "prchk_reftag": false, 00:17:38.353 "prchk_guard": false, 00:17:38.353 "hdgst": false, 00:17:38.353 "ddgst": false, 00:17:38.353 "dhchap_key": "key3", 00:17:38.353 "allow_unrecognized_csi": false, 00:17:38.353 "method": "bdev_nvme_attach_controller", 00:17:38.353 "req_id": 1 00:17:38.353 } 00:17:38.353 Got JSON-RPC error response 00:17:38.353 response: 00:17:38.353 { 00:17:38.353 "code": -5, 00:17:38.353 "message": "Input/output error" 00:17:38.353 } 00:17:38.353 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:38.353 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:38.353 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:38.353 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:38.353 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:38.353 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:38.353 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 
00:17:38.353 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:38.353 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:38.354 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:38.611 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:38.611 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.611 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.611 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.611 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:38.611 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.611 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.611 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.611 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key 
key1 00:17:38.611 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:38.611 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:38.611 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:38.611 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:38.611 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:38.611 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:38.611 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:38.611 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:38.611 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:39.176 request: 00:17:39.176 { 00:17:39.176 "name": "nvme0", 00:17:39.176 "trtype": "tcp", 00:17:39.176 "traddr": "10.0.0.2", 00:17:39.176 "adrfam": "ipv4", 00:17:39.177 "trsvcid": "4420", 00:17:39.177 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:39.177 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:39.177 "prchk_reftag": false, 00:17:39.177 "prchk_guard": false, 00:17:39.177 "hdgst": false, 00:17:39.177 "ddgst": false, 00:17:39.177 "dhchap_key": "key0", 00:17:39.177 "dhchap_ctrlr_key": "key1", 00:17:39.177 "allow_unrecognized_csi": false, 00:17:39.177 "method": "bdev_nvme_attach_controller", 00:17:39.177 "req_id": 1 00:17:39.177 } 00:17:39.177 Got JSON-RPC error response 00:17:39.177 response: 00:17:39.177 { 00:17:39.177 "code": -5, 00:17:39.177 "message": "Input/output error" 00:17:39.177 } 00:17:39.177 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:39.177 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:39.177 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:39.177 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:39.177 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:39.177 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:39.177 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:39.435 nvme0n1 00:17:39.435 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 
00:17:39.435 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:39.435 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.693 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.693 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.693 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.951 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:17:39.951 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.951 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.952 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.952 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:39.952 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:39.952 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:41.326 nvme0n1 00:17:41.327 09:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:41.327 09:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:41.327 09:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.585 09:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.585 09:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:41.585 09:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.585 09:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.585 09:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.585 09:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:41.585 09:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.585 09:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:41.843 09:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.843 09:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:17:41.843 09:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: --dhchap-ctrl-secret DHHC-1:03:ZDQ3NDMzMzU1MGJhZmMwNGEyNTJmMzg3ZDM1MjcxOGQ3MmM5ZDcwOGJiN2E1Zjk0NmFlMjA1MzgxOTZjZTRmNbJ+tVY=: 00:17:42.774 09:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:42.774 09:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:42.774 09:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:42.774 09:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:42.774 09:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:42.774 09:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:42.774 09:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:42.774 09:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.774 09:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.031 09:08:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:17:43.031 09:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:43.031 09:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:43.031 09:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:43.031 09:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.031 09:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:43.031 09:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.031 09:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:43.031 09:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:43.031 09:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:43.962 request: 00:17:43.962 { 00:17:43.962 "name": "nvme0", 00:17:43.962 "trtype": "tcp", 00:17:43.962 "traddr": "10.0.0.2", 00:17:43.962 "adrfam": "ipv4", 00:17:43.962 "trsvcid": "4420", 00:17:43.962 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:43.962 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:43.962 "prchk_reftag": false, 00:17:43.962 "prchk_guard": false, 00:17:43.962 "hdgst": false, 00:17:43.962 "ddgst": false, 00:17:43.962 "dhchap_key": "key1", 00:17:43.962 "allow_unrecognized_csi": false, 00:17:43.962 "method": "bdev_nvme_attach_controller", 00:17:43.962 "req_id": 1 00:17:43.962 } 00:17:43.962 Got JSON-RPC error response 00:17:43.962 response: 00:17:43.962 { 00:17:43.962 "code": -5, 00:17:43.962 "message": "Input/output error" 00:17:43.962 } 00:17:43.962 09:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:43.962 09:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:43.962 09:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:43.962 09:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:43.962 09:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:43.962 09:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:43.962 09:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:45.378 nvme0n1 00:17:45.378 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc 
bdev_nvme_get_controllers 00:17:45.378 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:45.378 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.659 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.659 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.659 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.917 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:45.917 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.917 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.917 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.917 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:45.917 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:45.917 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:46.175 nvme0n1 00:17:46.175 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:46.175 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.175 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:46.433 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.433 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.433 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.691 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:46.691 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.691 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.691 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.691 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: '' 2s 00:17:46.691 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:46.691 09:08:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:46.691 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: 00:17:46.691 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:46.691 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:46.691 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:46.691 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: ]] 00:17:46.691 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OGZkY2MwMWRjNTNiMTM0ODE4YTE5ZTMzZWE3YTI0Y2M8SrIe: 00:17:46.691 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:46.691 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:46.691 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:49.214 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:49.214 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:17:49.214 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:17:49.214 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:17:49.214 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:17:49.214 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:17:49.214 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1248 -- # return 0 00:17:49.214 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:49.214 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.214 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.214 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.214 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: 2s 00:17:49.214 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:49.214 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:49.214 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:49.215 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: 00:17:49.215 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:49.215 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:49.215 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:49.215 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: ]] 00:17:49.215 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo 
DHHC-1:02:NTYwZWJjMDY1NmVjNGQ3M2Y1ZjVmZjIzZjYxMGVlZDNkYjBhNjk1OThlY2Q4YjZlqr4I8g==: 00:17:49.215 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:49.215 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:51.114 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:51.114 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:17:51.114 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:17:51.114 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:17:51.114 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:17:51.114 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:17:51.114 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:17:51.114 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.114 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:51.114 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.114 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.114 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.114 09:08:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:51.114 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:51.114 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:52.487 nvme0n1 00:17:52.487 09:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:52.487 09:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.487 09:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.487 09:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.487 09:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:52.487 09:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:53.053 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:53.053 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:53.053 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.311 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.311 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:53.311 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.311 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.580 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.580 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:53.581 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:53.838 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:53.838 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:53.838 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.097 09:08:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.097 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:54.097 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.097 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.097 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.097 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:54.097 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:54.097 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:54.097 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:54.097 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:54.097 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:54.097 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:54.097 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:54.097 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:55.031 request: 00:17:55.031 { 00:17:55.031 "name": "nvme0", 00:17:55.031 "dhchap_key": "key1", 00:17:55.031 "dhchap_ctrlr_key": "key3", 00:17:55.031 "method": "bdev_nvme_set_keys", 00:17:55.031 "req_id": 1 00:17:55.031 } 00:17:55.031 Got JSON-RPC error response 00:17:55.031 response: 00:17:55.031 { 00:17:55.031 "code": -13, 00:17:55.031 "message": "Permission denied" 00:17:55.031 } 00:17:55.031 09:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:55.031 09:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:55.031 09:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:55.031 09:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:55.031 09:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:55.031 09:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:55.031 09:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.031 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:17:55.031 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:56.405 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:56.405 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:56.405 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.405 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:56.405 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:56.405 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.405 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.405 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.405 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:56.405 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:56.405 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:57.777 nvme0n1 00:17:57.777 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:57.777 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.777 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.777 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.777 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:57.777 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:57.777 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:57.777 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:57.777 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:57.777 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:57.777 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:57.777 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:57.777 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:58.712 request: 00:17:58.712 { 00:17:58.712 "name": "nvme0", 00:17:58.712 "dhchap_key": "key2", 
00:17:58.712 "dhchap_ctrlr_key": "key0", 00:17:58.712 "method": "bdev_nvme_set_keys", 00:17:58.712 "req_id": 1 00:17:58.712 } 00:17:58.712 Got JSON-RPC error response 00:17:58.712 response: 00:17:58.712 { 00:17:58.712 "code": -13, 00:17:58.712 "message": "Permission denied" 00:17:58.712 } 00:17:58.712 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:58.712 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:58.712 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:58.712 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:58.712 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:58.712 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.712 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:58.970 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:58.970 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:59.903 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:59.903 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:59.903 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.161 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:00.161 09:08:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:00.161 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:00.161 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3314499 00:18:00.161 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3314499 ']' 00:18:00.161 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3314499 00:18:00.161 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:00.161 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:00.161 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3314499 00:18:00.161 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:00.161 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:00.161 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3314499' 00:18:00.161 killing process with pid 3314499 00:18:00.161 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3314499 00:18:00.161 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3314499 00:18:00.725 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:00.726 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:00.726 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:00.726 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:00.726 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:00.726 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:00.726 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:00.726 rmmod nvme_tcp 00:18:00.726 rmmod nvme_fabrics 00:18:00.726 rmmod nvme_keyring 00:18:00.726 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:00.726 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:00.726 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:00.726 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3337415 ']' 00:18:00.726 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3337415 00:18:00.726 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 3337415 ']' 00:18:00.726 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 3337415 00:18:00.726 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:00.726 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:00.726 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3337415 00:18:00.726 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:00.726 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:00.726 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing 
process with pid 3337415' 00:18:00.726 killing process with pid 3337415 00:18:00.726 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 3337415 00:18:00.726 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 3337415 00:18:00.984 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:00.984 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:00.984 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:00.984 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:00.984 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:00.984 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:00.984 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:00.984 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:00.984 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:00.984 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.984 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.984 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.532 09:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:03.532 09:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.elz /tmp/spdk.key-sha256.0Ke 
/tmp/spdk.key-sha384.q2n /tmp/spdk.key-sha512.Vy4 /tmp/spdk.key-sha512.pDa /tmp/spdk.key-sha384.32b /tmp/spdk.key-sha256.5zo '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:03.532 00:18:03.532 real 3m32.383s 00:18:03.532 user 8m18.701s 00:18:03.532 sys 0m27.916s 00:18:03.532 09:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:03.532 09:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.532 ************************************ 00:18:03.532 END TEST nvmf_auth_target 00:18:03.532 ************************************ 00:18:03.532 09:08:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:03.532 09:08:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:03.532 09:08:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:18:03.532 09:08:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:03.532 09:08:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:03.532 ************************************ 00:18:03.532 START TEST nvmf_bdevio_no_huge 00:18:03.532 ************************************ 00:18:03.532 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:03.532 * Looking for test storage... 
00:18:03.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:03.532 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:03.532 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:18:03.532 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:03.532 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:03.532 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:03.532 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:03.532 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:03.532 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:03.532 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:03.532 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:03.532 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:03.532 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:03.532 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:03.532 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:03.532 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:03.532 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:03.532 09:08:40 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:03.532 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:03.532 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:03.532 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:03.532 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:03.532 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:03.532 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:03.532 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:03.532 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:03.532 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:03.532 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:03.532 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:03.532 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:03.532 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:03.532 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:03.532 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:03.532 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:03.532 09:08:40 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:03.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.532 --rc genhtml_branch_coverage=1 00:18:03.532 --rc genhtml_function_coverage=1 00:18:03.532 --rc genhtml_legend=1 00:18:03.532 --rc geninfo_all_blocks=1 00:18:03.532 --rc geninfo_unexecuted_blocks=1 00:18:03.532 00:18:03.532 ' 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:03.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.533 --rc genhtml_branch_coverage=1 00:18:03.533 --rc genhtml_function_coverage=1 00:18:03.533 --rc genhtml_legend=1 00:18:03.533 --rc geninfo_all_blocks=1 00:18:03.533 --rc geninfo_unexecuted_blocks=1 00:18:03.533 00:18:03.533 ' 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:03.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.533 --rc genhtml_branch_coverage=1 00:18:03.533 --rc genhtml_function_coverage=1 00:18:03.533 --rc genhtml_legend=1 00:18:03.533 --rc geninfo_all_blocks=1 00:18:03.533 --rc geninfo_unexecuted_blocks=1 00:18:03.533 00:18:03.533 ' 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:03.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.533 --rc genhtml_branch_coverage=1 00:18:03.533 --rc genhtml_function_coverage=1 00:18:03.533 --rc genhtml_legend=1 00:18:03.533 --rc geninfo_all_blocks=1 00:18:03.533 --rc geninfo_unexecuted_blocks=1 00:18:03.533 00:18:03.533 ' 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:03.533 
09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:03.533 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:03.533 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 
0x159b)' 00:18:05.436 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:05.436 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:05.436 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:05.437 Found net devices under 0000:09:00.0: cvl_0_0 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.437 
09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:05.437 Found net devices under 0000:09:00.1: cvl_0_1 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:05.437 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:05.696 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:18:05.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:05.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:18:05.696 00:18:05.696 --- 10.0.0.2 ping statistics --- 00:18:05.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.696 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:18:05.696 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:05.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:05.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:18:05.696 00:18:05.696 --- 10.0.0.1 ping statistics --- 00:18:05.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.696 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:18:05.696 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:05.696 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:18:05.696 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:05.696 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:05.696 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:05.696 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:05.696 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:05.696 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:05.696 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:05.696 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:18:05.696 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:05.696 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:05.696 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:05.696 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3342677 00:18:05.696 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:05.696 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3342677 00:18:05.696 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 3342677 ']' 00:18:05.696 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.696 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:05.696 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.696 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:05.696 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:05.696 [2024-11-20 09:08:42.546140] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:18:05.696 [2024-11-20 09:08:42.546225] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:05.696 [2024-11-20 09:08:42.624381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:05.696 [2024-11-20 09:08:42.682916] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:05.696 [2024-11-20 09:08:42.682969] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:05.696 [2024-11-20 09:08:42.683008] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:05.696 [2024-11-20 09:08:42.683020] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:05.696 [2024-11-20 09:08:42.683030] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:05.696 [2024-11-20 09:08:42.684150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:05.696 [2024-11-20 09:08:42.684214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:05.696 [2024-11-20 09:08:42.684264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:05.696 [2024-11-20 09:08:42.684267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:05.954 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:05.954 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:18:05.954 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:05.954 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:05.954 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:05.954 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:05.954 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:05.954 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.954 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:05.954 [2024-11-20 09:08:42.847457] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:05.954 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.954 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:05.954 09:08:42 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.954 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:05.954 Malloc0 00:18:05.954 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.954 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:05.954 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.954 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:05.955 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.955 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:05.955 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.955 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:05.955 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.955 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:05.955 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.955 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:05.955 [2024-11-20 09:08:42.886091] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.955 09:08:42 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.955 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:05.955 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:05.955 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:05.955 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:05.955 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:05.955 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:05.955 { 00:18:05.955 "params": { 00:18:05.955 "name": "Nvme$subsystem", 00:18:05.955 "trtype": "$TEST_TRANSPORT", 00:18:05.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:05.955 "adrfam": "ipv4", 00:18:05.955 "trsvcid": "$NVMF_PORT", 00:18:05.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:05.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:05.955 "hdgst": ${hdgst:-false}, 00:18:05.955 "ddgst": ${ddgst:-false} 00:18:05.955 }, 00:18:05.955 "method": "bdev_nvme_attach_controller" 00:18:05.955 } 00:18:05.955 EOF 00:18:05.955 )") 00:18:05.955 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:05.955 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:18:05.955 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:05.955 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:05.955 "params": { 00:18:05.955 "name": "Nvme1", 00:18:05.955 "trtype": "tcp", 00:18:05.955 "traddr": "10.0.0.2", 00:18:05.955 "adrfam": "ipv4", 00:18:05.955 "trsvcid": "4420", 00:18:05.955 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:05.955 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:05.955 "hdgst": false, 00:18:05.955 "ddgst": false 00:18:05.955 }, 00:18:05.955 "method": "bdev_nvme_attach_controller" 00:18:05.955 }' 00:18:05.955 [2024-11-20 09:08:42.936533] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:18:05.955 [2024-11-20 09:08:42.936627] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3342821 ] 00:18:06.214 [2024-11-20 09:08:43.008116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:06.214 [2024-11-20 09:08:43.074314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.214 [2024-11-20 09:08:43.074351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.214 [2024-11-20 09:08:43.074356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.472 I/O targets: 00:18:06.472 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:06.472 00:18:06.472 00:18:06.472 CUnit - A unit testing framework for C - Version 2.1-3 00:18:06.472 http://cunit.sourceforge.net/ 00:18:06.472 00:18:06.472 00:18:06.472 Suite: bdevio tests on: Nvme1n1 00:18:06.472 Test: blockdev write read block ...passed 00:18:06.472 Test: blockdev write zeroes read block ...passed 00:18:06.472 Test: blockdev write zeroes read no split ...passed 00:18:06.472 Test: blockdev write zeroes 
read split ...passed 00:18:06.472 Test: blockdev write zeroes read split partial ...passed 00:18:06.472 Test: blockdev reset ...[2024-11-20 09:08:43.430744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:06.472 [2024-11-20 09:08:43.430864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee36e0 (9): Bad file descriptor 00:18:06.472 [2024-11-20 09:08:43.449782] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:18:06.472 passed 00:18:06.472 Test: blockdev write read 8 blocks ...passed 00:18:06.472 Test: blockdev write read size > 128k ...passed 00:18:06.472 Test: blockdev write read invalid size ...passed 00:18:06.730 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:06.730 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:06.730 Test: blockdev write read max offset ...passed 00:18:06.730 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:06.730 Test: blockdev writev readv 8 blocks ...passed 00:18:06.730 Test: blockdev writev readv 30 x 1block ...passed 00:18:06.730 Test: blockdev writev readv block ...passed 00:18:06.730 Test: blockdev writev readv size > 128k ...passed 00:18:06.730 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:06.730 Test: blockdev comparev and writev ...[2024-11-20 09:08:43.704859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:06.730 [2024-11-20 09:08:43.704898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.730 [2024-11-20 09:08:43.704923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:06.730 [2024-11-20 
09:08:43.704940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:06.730 [2024-11-20 09:08:43.705291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:06.730 [2024-11-20 09:08:43.705325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:06.730 [2024-11-20 09:08:43.705349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:06.730 [2024-11-20 09:08:43.705366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:06.730 [2024-11-20 09:08:43.705713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:06.730 [2024-11-20 09:08:43.705738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:06.730 [2024-11-20 09:08:43.705760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:06.730 [2024-11-20 09:08:43.705776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:06.730 [2024-11-20 09:08:43.706121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:06.730 [2024-11-20 09:08:43.706145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:06.730 [2024-11-20 09:08:43.706167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:18:06.730 [2024-11-20 09:08:43.706183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:06.730 passed 00:18:06.988 Test: blockdev nvme passthru rw ...passed 00:18:06.988 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:08:43.789568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:06.988 [2024-11-20 09:08:43.789597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:06.988 [2024-11-20 09:08:43.789738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:06.988 [2024-11-20 09:08:43.789768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:06.988 [2024-11-20 09:08:43.789914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:06.988 [2024-11-20 09:08:43.789938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:06.988 [2024-11-20 09:08:43.790087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:06.988 [2024-11-20 09:08:43.790112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:06.988 passed 00:18:06.988 Test: blockdev nvme admin passthru ...passed 00:18:06.988 Test: blockdev copy ...passed 00:18:06.988 00:18:06.988 Run Summary: Type Total Ran Passed Failed Inactive 00:18:06.988 suites 1 1 n/a 0 0 00:18:06.988 tests 23 23 23 0 0 00:18:06.988 asserts 152 152 152 0 n/a 00:18:06.988 00:18:06.988 Elapsed time = 1.142 seconds 
00:18:07.247 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:07.247 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.247 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:07.247 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.247 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:07.247 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:07.247 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:07.247 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:07.247 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:07.247 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:07.247 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:07.247 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:07.247 rmmod nvme_tcp 00:18:07.247 rmmod nvme_fabrics 00:18:07.247 rmmod nvme_keyring 00:18:07.247 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:07.247 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:18:07.247 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:07.247 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3342677 ']' 00:18:07.247 09:08:44 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3342677 00:18:07.247 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 3342677 ']' 00:18:07.247 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 3342677 00:18:07.247 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:18:07.247 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:07.247 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3342677 00:18:07.247 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:18:07.504 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:18:07.504 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3342677' 00:18:07.504 killing process with pid 3342677 00:18:07.504 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 3342677 00:18:07.504 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 3342677 00:18:07.764 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:07.764 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:07.764 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:07.764 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:07.764 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:07.764 09:08:44 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:07.764 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:07.764 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:07.764 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:07.764 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.764 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:07.764 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.674 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:09.674 00:18:09.674 real 0m6.692s 00:18:09.674 user 0m10.508s 00:18:09.674 sys 0m2.661s 00:18:09.674 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:09.674 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:09.674 ************************************ 00:18:09.674 END TEST nvmf_bdevio_no_huge 00:18:09.674 ************************************ 00:18:09.934 09:08:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:09.934 09:08:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:09.934 09:08:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:09.934 09:08:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:09.934 
************************************ 00:18:09.934 START TEST nvmf_tls 00:18:09.934 ************************************ 00:18:09.934 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:09.934 * Looking for test storage... 00:18:09.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:09.934 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:09.934 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:18:09.934 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:09.934 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:09.934 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:09.934 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:09.934 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:09.934 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:09.934 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:09.934 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:09.934 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:09.934 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:09.934 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:09.934 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:09.934 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:18:09.934 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:09.934 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:09.934 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:09.934 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:09.934 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:09.934 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:09.934 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:09.934 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:09.934 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:09.934 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:09.934 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:09.934 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:09.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.935 --rc genhtml_branch_coverage=1 00:18:09.935 --rc genhtml_function_coverage=1 00:18:09.935 --rc genhtml_legend=1 00:18:09.935 --rc geninfo_all_blocks=1 00:18:09.935 --rc geninfo_unexecuted_blocks=1 00:18:09.935 00:18:09.935 ' 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:09.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.935 --rc genhtml_branch_coverage=1 00:18:09.935 --rc genhtml_function_coverage=1 00:18:09.935 --rc genhtml_legend=1 00:18:09.935 --rc geninfo_all_blocks=1 00:18:09.935 --rc geninfo_unexecuted_blocks=1 00:18:09.935 00:18:09.935 ' 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:09.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.935 --rc genhtml_branch_coverage=1 00:18:09.935 --rc genhtml_function_coverage=1 00:18:09.935 --rc genhtml_legend=1 00:18:09.935 --rc geninfo_all_blocks=1 00:18:09.935 --rc geninfo_unexecuted_blocks=1 00:18:09.935 00:18:09.935 ' 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:09.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.935 --rc genhtml_branch_coverage=1 00:18:09.935 --rc genhtml_function_coverage=1 00:18:09.935 --rc genhtml_legend=1 00:18:09.935 --rc geninfo_all_blocks=1 00:18:09.935 --rc geninfo_unexecuted_blocks=1 00:18:09.935 00:18:09.935 ' 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:09.935 
09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:09.935 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:18:09.935 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:12.470 09:08:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:12.470 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:12.470 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:12.470 09:08:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:12.470 Found net devices under 0000:09:00.0: cvl_0_0 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:12.470 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:12.471 Found net devices under 0000:09:00.1: cvl_0_1 00:18:12.471 09:08:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:12.471 
09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:12.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:12.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:18:12.471 00:18:12.471 --- 10.0.0.2 ping statistics --- 00:18:12.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.471 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:12.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:12.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:18:12.471 00:18:12.471 --- 10.0.0.1 ping statistics --- 00:18:12.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.471 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3344909 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3344909 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3344909 ']' 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:12.471 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:12.471 [2024-11-20 09:08:49.300183] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:18:12.471 [2024-11-20 09:08:49.300259] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:12.471 [2024-11-20 09:08:49.372904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.471 [2024-11-20 09:08:49.428771] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:12.471 [2024-11-20 09:08:49.428825] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:12.471 [2024-11-20 09:08:49.428859] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:12.471 [2024-11-20 09:08:49.428871] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:12.471 [2024-11-20 09:08:49.428882] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:12.471 [2024-11-20 09:08:49.429487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.730 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:12.730 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:12.730 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:12.730 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:12.730 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:12.730 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:12.730 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:12.730 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:12.988 true 00:18:12.988 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:12.988 09:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:13.246 09:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:13.246 09:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:13.246 
09:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:13.505 09:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:13.505 09:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:13.764 09:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:13.764 09:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:13.764 09:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:14.022 09:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:14.022 09:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:14.282 09:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:14.282 09:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:14.282 09:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:14.282 09:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:14.541 09:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:14.541 09:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:14.541 09:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:18:14.798 09:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:14.798 09:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:15.056 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:15.056 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:15.056 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:15.315 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:15.315 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:15.573 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:15.573 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:15.573 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:15.573 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:15.573 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:15.573 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:15.573 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:18:15.573 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:15.573 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:15.831 09:08:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:15.831 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:15.831 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:15.831 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:15.831 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:15.831 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:18:15.831 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:15.831 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:15.831 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:15.831 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:15.831 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.vVrSOQmaJp 00:18:15.831 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:15.831 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.Mu04h2m32V 00:18:15.831 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:15.831 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:15.831 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.vVrSOQmaJp 00:18:15.831 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.Mu04h2m32V 00:18:15.831 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:16.089 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:16.347 09:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.vVrSOQmaJp 00:18:16.347 09:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.vVrSOQmaJp 00:18:16.347 09:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:16.605 [2024-11-20 09:08:53.598922] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:16.605 09:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:17.171 09:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:17.171 [2024-11-20 09:08:54.152460] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:17.171 [2024-11-20 09:08:54.152744] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:17.171 09:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:17.429 malloc0 00:18:17.429 09:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:17.687 09:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.vVrSOQmaJp 00:18:17.945 09:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:18.203 09:08:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.vVrSOQmaJp 00:18:30.473 Initializing NVMe Controllers 00:18:30.473 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:30.473 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:30.473 Initialization complete. Launching workers. 
00:18:30.473 ======================================================== 00:18:30.473 Latency(us) 00:18:30.473 Device Information : IOPS MiB/s Average min max 00:18:30.473 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8663.79 33.84 7389.35 1159.47 9056.90 00:18:30.473 ======================================================== 00:18:30.473 Total : 8663.79 33.84 7389.35 1159.47 9056.90 00:18:30.473 00:18:30.473 09:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vVrSOQmaJp 00:18:30.473 09:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:30.473 09:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:30.473 09:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:30.473 09:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.vVrSOQmaJp 00:18:30.473 09:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:30.473 09:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3346925 00:18:30.473 09:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:30.473 09:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:30.473 09:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3346925 /var/tmp/bdevperf.sock 00:18:30.473 09:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3346925 ']' 00:18:30.473 09:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:18:30.473 09:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:30.473 09:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:30.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:30.473 09:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:30.473 09:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.473 [2024-11-20 09:09:05.386997] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:18:30.473 [2024-11-20 09:09:05.387086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3346925 ] 00:18:30.473 [2024-11-20 09:09:05.460494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.473 [2024-11-20 09:09:05.523849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:30.473 09:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:30.473 09:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:30.473 09:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vVrSOQmaJp 00:18:30.473 09:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:18:30.473 [2024-11-20 09:09:06.149595] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:30.473 TLSTESTn1 00:18:30.473 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:30.473 Running I/O for 10 seconds... 00:18:31.406 3459.00 IOPS, 13.51 MiB/s [2024-11-20T08:09:09.805Z] 3506.50 IOPS, 13.70 MiB/s [2024-11-20T08:09:10.737Z] 3525.67 IOPS, 13.77 MiB/s [2024-11-20T08:09:11.671Z] 3538.75 IOPS, 13.82 MiB/s [2024-11-20T08:09:12.604Z] 3554.20 IOPS, 13.88 MiB/s [2024-11-20T08:09:13.538Z] 3554.67 IOPS, 13.89 MiB/s [2024-11-20T08:09:14.471Z] 3560.14 IOPS, 13.91 MiB/s [2024-11-20T08:09:15.405Z] 3559.00 IOPS, 13.90 MiB/s [2024-11-20T08:09:16.778Z] 3560.56 IOPS, 13.91 MiB/s [2024-11-20T08:09:16.778Z] 3548.40 IOPS, 13.86 MiB/s 00:18:39.749 Latency(us) 00:18:39.749 [2024-11-20T08:09:16.778Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.749 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:39.749 Verification LBA range: start 0x0 length 0x2000 00:18:39.749 TLSTESTn1 : 10.02 3553.84 13.88 0.00 0.00 35955.73 7330.32 42525.58 00:18:39.749 [2024-11-20T08:09:16.778Z] =================================================================================================================== 00:18:39.749 [2024-11-20T08:09:16.778Z] Total : 3553.84 13.88 0.00 0.00 35955.73 7330.32 42525.58 00:18:39.749 { 00:18:39.749 "results": [ 00:18:39.749 { 00:18:39.749 "job": "TLSTESTn1", 00:18:39.749 "core_mask": "0x4", 00:18:39.749 "workload": "verify", 00:18:39.749 "status": "finished", 00:18:39.749 "verify_range": { 00:18:39.749 "start": 0, 00:18:39.749 "length": 8192 00:18:39.749 }, 00:18:39.749 "queue_depth": 128, 00:18:39.749 "io_size": 4096, 00:18:39.749 "runtime": 10.01987, 00:18:39.749 "iops": 
3553.838522855087, 00:18:39.749 "mibps": 13.882181729902683, 00:18:39.749 "io_failed": 0, 00:18:39.749 "io_timeout": 0, 00:18:39.749 "avg_latency_us": 35955.72710369726, 00:18:39.749 "min_latency_us": 7330.322962962963, 00:18:39.749 "max_latency_us": 42525.58222222222 00:18:39.749 } 00:18:39.749 ], 00:18:39.749 "core_count": 1 00:18:39.749 } 00:18:39.749 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:39.749 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3346925 00:18:39.749 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3346925 ']' 00:18:39.749 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3346925 00:18:39.749 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:39.749 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:39.749 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3346925 00:18:39.749 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:39.749 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:39.749 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3346925' 00:18:39.749 killing process with pid 3346925 00:18:39.749 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3346925 00:18:39.749 Received shutdown signal, test time was about 10.000000 seconds 00:18:39.749 00:18:39.749 Latency(us) 00:18:39.749 [2024-11-20T08:09:16.778Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.749 [2024-11-20T08:09:16.778Z] 
=================================================================================================================== 00:18:39.749 [2024-11-20T08:09:16.778Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:39.749 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3346925 00:18:39.749 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Mu04h2m32V 00:18:39.749 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:39.749 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Mu04h2m32V 00:18:39.749 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:39.749 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.750 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:39.750 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.750 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Mu04h2m32V 00:18:39.750 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:39.750 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:39.750 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:39.750 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Mu04h2m32V 00:18:39.750 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:39.750 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3348746 00:18:39.750 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:39.750 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:39.750 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3348746 /var/tmp/bdevperf.sock 00:18:39.750 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3348746 ']' 00:18:39.750 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:39.750 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:39.750 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:39.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:39.750 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:39.750 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.750 [2024-11-20 09:09:16.746108] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:18:39.750 [2024-11-20 09:09:16.746209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3348746 ] 00:18:40.008 [2024-11-20 09:09:16.819844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.008 [2024-11-20 09:09:16.883894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:40.008 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:40.008 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:40.008 09:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Mu04h2m32V 00:18:40.267 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:40.525 [2024-11-20 09:09:17.504115] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:40.525 [2024-11-20 09:09:17.515526] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:40.525 [2024-11-20 09:09:17.516342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7192c0 (107): Transport endpoint is not connected 00:18:40.525 [2024-11-20 09:09:17.517332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7192c0 (9): Bad file descriptor 00:18:40.525 [2024-11-20 
09:09:17.518331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:40.525 [2024-11-20 09:09:17.518352] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:40.525 [2024-11-20 09:09:17.518366] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:40.525 [2024-11-20 09:09:17.518386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:40.525 request: 00:18:40.525 { 00:18:40.525 "name": "TLSTEST", 00:18:40.525 "trtype": "tcp", 00:18:40.525 "traddr": "10.0.0.2", 00:18:40.525 "adrfam": "ipv4", 00:18:40.525 "trsvcid": "4420", 00:18:40.525 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.525 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:40.525 "prchk_reftag": false, 00:18:40.525 "prchk_guard": false, 00:18:40.525 "hdgst": false, 00:18:40.525 "ddgst": false, 00:18:40.525 "psk": "key0", 00:18:40.525 "allow_unrecognized_csi": false, 00:18:40.525 "method": "bdev_nvme_attach_controller", 00:18:40.525 "req_id": 1 00:18:40.525 } 00:18:40.525 Got JSON-RPC error response 00:18:40.525 response: 00:18:40.525 { 00:18:40.525 "code": -5, 00:18:40.525 "message": "Input/output error" 00:18:40.525 } 00:18:40.525 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3348746 00:18:40.525 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3348746 ']' 00:18:40.525 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3348746 00:18:40.525 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:40.525 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:40.525 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3348746 00:18:40.783 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:40.783 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:40.783 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3348746' 00:18:40.783 killing process with pid 3348746 00:18:40.783 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3348746 00:18:40.783 Received shutdown signal, test time was about 10.000000 seconds 00:18:40.783 00:18:40.783 Latency(us) 00:18:40.783 [2024-11-20T08:09:17.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.783 [2024-11-20T08:09:17.812Z] =================================================================================================================== 00:18:40.783 [2024-11-20T08:09:17.812Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:40.783 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3348746 00:18:40.783 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:40.783 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:40.783 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:40.783 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:40.783 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:40.783 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.vVrSOQmaJp 00:18:40.783 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 
00:18:40.783 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.vVrSOQmaJp 00:18:40.783 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:40.783 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:40.783 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:40.783 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:40.783 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.vVrSOQmaJp 00:18:40.783 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:40.783 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:40.783 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:40.783 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.vVrSOQmaJp 00:18:40.783 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:40.783 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3348890 00:18:40.784 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:40.784 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:40.784 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3348890 
/var/tmp/bdevperf.sock 00:18:40.784 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3348890 ']' 00:18:40.784 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:40.784 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:40.784 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:40.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:40.784 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:40.784 09:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.042 [2024-11-20 09:09:17.848034] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:18:41.042 [2024-11-20 09:09:17.848120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3348890 ] 00:18:41.042 [2024-11-20 09:09:17.915730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.042 [2024-11-20 09:09:17.972505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:41.300 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:41.300 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:41.300 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vVrSOQmaJp 00:18:41.557 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:41.816 [2024-11-20 09:09:18.607691] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:41.816 [2024-11-20 09:09:18.618269] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:41.816 [2024-11-20 09:09:18.618300] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:41.816 [2024-11-20 09:09:18.618359] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:41.816 [2024-11-20 09:09:18.618830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b52c0 (107): Transport endpoint is not connected 00:18:41.816 [2024-11-20 09:09:18.619820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b52c0 (9): Bad file descriptor 00:18:41.816 [2024-11-20 09:09:18.620820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:41.816 [2024-11-20 09:09:18.620839] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:41.816 [2024-11-20 09:09:18.620866] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:41.816 [2024-11-20 09:09:18.620884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:41.816 request: 00:18:41.816 { 00:18:41.816 "name": "TLSTEST", 00:18:41.816 "trtype": "tcp", 00:18:41.816 "traddr": "10.0.0.2", 00:18:41.816 "adrfam": "ipv4", 00:18:41.816 "trsvcid": "4420", 00:18:41.816 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.816 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:41.816 "prchk_reftag": false, 00:18:41.816 "prchk_guard": false, 00:18:41.816 "hdgst": false, 00:18:41.816 "ddgst": false, 00:18:41.816 "psk": "key0", 00:18:41.816 "allow_unrecognized_csi": false, 00:18:41.816 "method": "bdev_nvme_attach_controller", 00:18:41.816 "req_id": 1 00:18:41.816 } 00:18:41.816 Got JSON-RPC error response 00:18:41.816 response: 00:18:41.816 { 00:18:41.816 "code": -5, 00:18:41.816 "message": "Input/output error" 00:18:41.816 } 00:18:41.816 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3348890 00:18:41.816 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3348890 ']' 00:18:41.816 09:09:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3348890 00:18:41.816 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:41.816 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:41.816 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3348890 00:18:41.816 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:41.816 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:41.816 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3348890' 00:18:41.816 killing process with pid 3348890 00:18:41.816 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3348890 00:18:41.816 Received shutdown signal, test time was about 10.000000 seconds 00:18:41.816 00:18:41.816 Latency(us) 00:18:41.816 [2024-11-20T08:09:18.845Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.816 [2024-11-20T08:09:18.845Z] =================================================================================================================== 00:18:41.816 [2024-11-20T08:09:18.845Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:41.816 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3348890 00:18:42.075 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:42.075 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:42.075 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:42.075 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:42.075 09:09:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:42.075 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.vVrSOQmaJp 00:18:42.075 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:42.075 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.vVrSOQmaJp 00:18:42.075 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:42.075 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:42.075 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:42.075 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:42.075 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.vVrSOQmaJp 00:18:42.075 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:42.075 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:42.075 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:42.075 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.vVrSOQmaJp 00:18:42.075 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:42.075 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3349030 00:18:42.075 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:42.075 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:42.075 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3349030 /var/tmp/bdevperf.sock 00:18:42.075 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3349030 ']' 00:18:42.075 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:42.075 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:42.075 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:42.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:42.075 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:42.075 09:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:42.075 [2024-11-20 09:09:18.914545] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
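The "Could not find PSK for identity" errors in this run all reference the same identity string. That string can be reconstructed from pieces the log itself shows: a fixed "NVMe0R01" prefix followed by the host NQN and subsystem NQN, space-separated. A minimal sketch (the prefix is copied verbatim from the errors above; its internal field meanings are not stated in this log and are not assumed here):

```shell
# Rebuild the TLS PSK identity the target tried (and failed) to look up.
# Layout inferred from the errors above: "<prefix> <hostnqn> <subnqn>",
# with the "NVMe0R01" prefix taken verbatim from the log.
hostnqn=nqn.2016-06.io.spdk:host2
subnqn=nqn.2016-06.io.spdk:cnode1
printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"
```

The printed identity matches the one in the tcp.c and posix.c errors above, which is why registering the key only on the bdevperf (initiator) side is not enough: the target has no retained PSK under that identity, and the connection is torn down.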
00:18:42.075 [2024-11-20 09:09:18.914640] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3349030 ] 00:18:42.075 [2024-11-20 09:09:18.980010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.075 [2024-11-20 09:09:19.037854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.333 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:42.333 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:42.333 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vVrSOQmaJp 00:18:42.592 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:42.850 [2024-11-20 09:09:19.672247] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:42.850 [2024-11-20 09:09:19.681090] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:42.850 [2024-11-20 09:09:19.681122] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:42.850 [2024-11-20 09:09:19.681175] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:42.850 [2024-11-20 09:09:19.681459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20972c0 (107): Transport endpoint is not connected 00:18:42.850 [2024-11-20 09:09:19.682449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20972c0 (9): Bad file descriptor 00:18:42.850 [2024-11-20 09:09:19.683449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:42.850 [2024-11-20 09:09:19.683469] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:42.850 [2024-11-20 09:09:19.683483] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:42.850 [2024-11-20 09:09:19.683501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:18:42.850 request: 00:18:42.850 { 00:18:42.850 "name": "TLSTEST", 00:18:42.850 "trtype": "tcp", 00:18:42.850 "traddr": "10.0.0.2", 00:18:42.850 "adrfam": "ipv4", 00:18:42.850 "trsvcid": "4420", 00:18:42.850 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:42.850 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:42.850 "prchk_reftag": false, 00:18:42.850 "prchk_guard": false, 00:18:42.850 "hdgst": false, 00:18:42.850 "ddgst": false, 00:18:42.850 "psk": "key0", 00:18:42.850 "allow_unrecognized_csi": false, 00:18:42.850 "method": "bdev_nvme_attach_controller", 00:18:42.850 "req_id": 1 00:18:42.850 } 00:18:42.850 Got JSON-RPC error response 00:18:42.850 response: 00:18:42.850 { 00:18:42.850 "code": -5, 00:18:42.850 "message": "Input/output error" 00:18:42.850 } 00:18:42.850 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3349030 00:18:42.850 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3349030 ']' 00:18:42.850 09:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3349030 00:18:42.850 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:42.850 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:42.850 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3349030 00:18:42.850 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:42.850 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:42.850 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3349030' 00:18:42.850 killing process with pid 3349030 00:18:42.850 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3349030 00:18:42.850 Received shutdown signal, test time was about 10.000000 seconds 00:18:42.850 00:18:42.850 Latency(us) 00:18:42.850 [2024-11-20T08:09:19.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.850 [2024-11-20T08:09:19.879Z] =================================================================================================================== 00:18:42.850 [2024-11-20T08:09:19.879Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:42.850 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3349030 00:18:43.108 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:43.108 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:43.108 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:43.109 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:43.109 09:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:43.109 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:43.109 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:43.109 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:43.109 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:43.109 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:43.109 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:43.109 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:43.109 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:43.109 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:43.109 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:43.109 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:43.109 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:43.109 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:43.109 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3349173 00:18:43.109 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:43.109 09:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:43.109 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3349173 /var/tmp/bdevperf.sock 00:18:43.109 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3349173 ']' 00:18:43.109 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:43.109 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:43.109 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:43.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:43.109 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:43.109 09:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.109 [2024-11-20 09:09:20.011718] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:18:43.109 [2024-11-20 09:09:20.011805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3349173 ] 00:18:43.109 [2024-11-20 09:09:20.082636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.366 [2024-11-20 09:09:20.145272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:43.366 09:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:43.366 09:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:43.366 09:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:43.625 [2024-11-20 09:09:20.523167] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:43.625 [2024-11-20 09:09:20.523209] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:43.625 request: 00:18:43.625 { 00:18:43.625 "name": "key0", 00:18:43.625 "path": "", 00:18:43.625 "method": "keyring_file_add_key", 00:18:43.625 "req_id": 1 00:18:43.625 } 00:18:43.625 Got JSON-RPC error response 00:18:43.625 response: 00:18:43.625 { 00:18:43.625 "code": -1, 00:18:43.625 "message": "Operation not permitted" 00:18:43.625 } 00:18:43.625 09:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:43.883 [2024-11-20 09:09:20.787988] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
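The keyring failure above is a pure path-validation error: `keyring_file_add_key key0 ''` is rejected by `keyring_file_check_path` before any TLS work starts, so the subsequent `bdev_nvme_attach_controller --psk key0` cannot load the key. A sketch of the rule (this mirrors the check's observable behavior in the log, not SPDK's actual implementation):

```shell
# Sketch of the path rule enforced by keyring_file_check_path above:
# file-based keys must be registered with an absolute path, so the empty
# string fails, while any absolute path passes this check.
check_path() {
  [[ $1 == /* ]] && echo accepted || echo 'Non-absolute paths are not allowed'
}
check_path ''                    # the failing call from the log
check_path /tmp/tmp.vVrSOQmaJp   # an absolute path passes the check
```

Note the two distinct failure modes in this test: the empty path fails at key registration (code -1, "Operation not permitted"), whereas the attach call then fails later with -126 ("Required key not available") because `key0` was never added to the keyring.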
00:18:43.883 [2024-11-20 09:09:20.788048] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:43.883 request: 00:18:43.883 { 00:18:43.883 "name": "TLSTEST", 00:18:43.883 "trtype": "tcp", 00:18:43.883 "traddr": "10.0.0.2", 00:18:43.883 "adrfam": "ipv4", 00:18:43.883 "trsvcid": "4420", 00:18:43.883 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.883 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:43.883 "prchk_reftag": false, 00:18:43.883 "prchk_guard": false, 00:18:43.883 "hdgst": false, 00:18:43.883 "ddgst": false, 00:18:43.883 "psk": "key0", 00:18:43.883 "allow_unrecognized_csi": false, 00:18:43.883 "method": "bdev_nvme_attach_controller", 00:18:43.883 "req_id": 1 00:18:43.883 } 00:18:43.883 Got JSON-RPC error response 00:18:43.883 response: 00:18:43.883 { 00:18:43.883 "code": -126, 00:18:43.883 "message": "Required key not available" 00:18:43.883 } 00:18:43.883 09:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3349173 00:18:43.883 09:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3349173 ']' 00:18:43.883 09:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3349173 00:18:43.883 09:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:43.883 09:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:43.883 09:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3349173 00:18:43.883 09:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:43.883 09:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:43.883 09:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3349173' 00:18:43.883 killing process with pid 3349173 
00:18:43.883 09:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3349173 00:18:43.883 Received shutdown signal, test time was about 10.000000 seconds 00:18:43.883 00:18:43.883 Latency(us) 00:18:43.883 [2024-11-20T08:09:20.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.883 [2024-11-20T08:09:20.912Z] =================================================================================================================== 00:18:43.883 [2024-11-20T08:09:20.912Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:43.883 09:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3349173 00:18:44.141 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:44.141 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:44.141 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:44.141 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:44.141 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:44.141 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3344909 00:18:44.141 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3344909 ']' 00:18:44.141 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3344909 00:18:44.141 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:44.141 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:44.141 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3344909 00:18:44.141 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# process_name=reactor_1 00:18:44.141 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:44.141 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3344909' 00:18:44.141 killing process with pid 3344909 00:18:44.141 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3344909 00:18:44.141 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3344909 00:18:44.399 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:44.399 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:44.399 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:44.399 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:44.399 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:44.399 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:44.399 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:44.399 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:44.399 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:44.399 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.PrAtAO5Ubf 00:18:44.399 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:44.399 09:09:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.PrAtAO5Ubf 00:18:44.399 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:44.399 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:44.399 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:44.399 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.399 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3349436 00:18:44.399 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:44.399 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3349436 00:18:44.399 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3349436 ']' 00:18:44.399 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.400 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:44.400 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.400 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:44.400 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.658 [2024-11-20 09:09:21.436709] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:18:44.658 [2024-11-20 09:09:21.436812] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.658 [2024-11-20 09:09:21.508881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.658 [2024-11-20 09:09:21.566421] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:44.658 [2024-11-20 09:09:21.566472] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:44.658 [2024-11-20 09:09:21.566502] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:44.658 [2024-11-20 09:09:21.566515] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:44.658 [2024-11-20 09:09:21.566525] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
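The `NVMeTLSkey-1:02:…` string produced above by `format_interchange_psk` is a TLS PSK in interchange format. A sketch of the encoding, mirroring the embedded `python -` call visible in the trace; the assumption (inferred from SPDK's helper, not stated in this log) is that the base64 body is the configured key bytes followed by their CRC-32 appended little-endian, and that digest id `2` maps to the `:02:` field of the envelope:

```shell
# Sketch of the PSK interchange encoding used by format_interchange_psk
# above. Assumption: body = base64(key_bytes + crc32(key_bytes) as 4
# little-endian bytes), wrapped as NVMeTLSkey-1:<digest>:<body>:
key=00112233445566778899aabbccddeeff0011223344556677
digest=02   # digest id from the log's format_key invocation
psk=$(python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()                   # key is used as ASCII bytes
crc = zlib.crc32(key).to_bytes(4, "little")  # CRC-32 of the key, appended LE
print(f"NVMeTLSkey-1:{sys.argv[2]}:{base64.b64encode(key + crc).decode()}:")
PYEOF
)
echo "$psk"
```

Under these assumptions the output should match the `NVMeTLSkey-1:02:MDAx…` value captured in the log; the parts the log directly confirms are the envelope prefix, the trailing colon, and that the base64 body begins with the key's ASCII bytes (decoding `MDAxMTIy…` yields `001122…`).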
00:18:44.658 [2024-11-20 09:09:21.567140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.658 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:44.658 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:44.658 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:44.658 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:44.658 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.916 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.916 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.PrAtAO5Ubf 00:18:44.916 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.PrAtAO5Ubf 00:18:44.916 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:45.175 [2024-11-20 09:09:21.946942] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:45.175 09:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:45.433 09:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:45.691 [2024-11-20 09:09:22.480423] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:45.691 [2024-11-20 09:09:22.480683] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:45.691 09:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:45.949 malloc0 00:18:45.949 09:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:46.207 09:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.PrAtAO5Ubf 00:18:46.465 09:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:46.724 09:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PrAtAO5Ubf 00:18:46.724 09:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:46.724 09:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:46.724 09:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:46.724 09:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.PrAtAO5Ubf 00:18:46.724 09:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:46.724 09:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3349726 00:18:46.724 09:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:46.724 09:09:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:46.724 09:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3349726 /var/tmp/bdevperf.sock 00:18:46.724 09:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3349726 ']' 00:18:46.724 09:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:46.724 09:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:46.724 09:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:46.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:46.724 09:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:46.724 09:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.724 [2024-11-20 09:09:23.641055] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
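For readability, here is the target-side TLS setup that setup_nvmf_tgt performed across the RPC calls above (transport, subsystem, TLS-enabled listener, namespace, key registration, host/PSK binding), condensed into one sequence. All subcommands and flags are verbatim from this log; the `echo` prefix makes it a dry run that only prints each command, so no live SPDK target or matching `rpc.py` path is needed:

```shell
# Dry-run recap of the target-side TLS setup performed above.
# "echo" prints each rpc.py invocation instead of executing it.
rpc="echo scripts/rpc.py"
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.PrAtAO5Ubf
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
```

The two pieces that make this run succeed where the earlier attempts failed: `-k` on the listener enables TLS, and `nvmf_subsystem_add_host … --psk key0` gives the target a retained PSK for host1's identity, so the bdevperf attach below completes and TLSTESTn1 runs I/O.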
00:18:46.724 [2024-11-20 09:09:23.641135] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3349726 ] 00:18:46.724 [2024-11-20 09:09:23.706037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.983 [2024-11-20 09:09:23.764557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:46.983 09:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:46.983 09:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:46.983 09:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PrAtAO5Ubf 00:18:47.240 09:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:47.498 [2024-11-20 09:09:24.391968] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:47.498 TLSTESTn1 00:18:47.498 09:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:47.756 Running I/O for 10 seconds... 
00:18:49.623 3286.00 IOPS, 12.84 MiB/s [2024-11-20T08:09:28.026Z] 3358.50 IOPS, 13.12 MiB/s [2024-11-20T08:09:28.959Z] 3436.00 IOPS, 13.42 MiB/s [2024-11-20T08:09:29.892Z] 3433.75 IOPS, 13.41 MiB/s [2024-11-20T08:09:30.827Z] 3462.20 IOPS, 13.52 MiB/s [2024-11-20T08:09:31.760Z] 3489.33 IOPS, 13.63 MiB/s [2024-11-20T08:09:32.693Z] 3493.00 IOPS, 13.64 MiB/s [2024-11-20T08:09:33.626Z] 3513.88 IOPS, 13.73 MiB/s [2024-11-20T08:09:34.997Z] 3528.11 IOPS, 13.78 MiB/s [2024-11-20T08:09:34.997Z] 3531.10 IOPS, 13.79 MiB/s 00:18:57.968 Latency(us) 00:18:57.968 [2024-11-20T08:09:34.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.968 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:57.968 Verification LBA range: start 0x0 length 0x2000 00:18:57.968 TLSTESTn1 : 10.02 3537.41 13.82 0.00 0.00 36128.58 6407.96 35340.89 00:18:57.968 [2024-11-20T08:09:34.997Z] =================================================================================================================== 00:18:57.968 [2024-11-20T08:09:34.997Z] Total : 3537.41 13.82 0.00 0.00 36128.58 6407.96 35340.89 00:18:57.968 { 00:18:57.968 "results": [ 00:18:57.968 { 00:18:57.968 "job": "TLSTESTn1", 00:18:57.968 "core_mask": "0x4", 00:18:57.968 "workload": "verify", 00:18:57.968 "status": "finished", 00:18:57.968 "verify_range": { 00:18:57.968 "start": 0, 00:18:57.968 "length": 8192 00:18:57.968 }, 00:18:57.968 "queue_depth": 128, 00:18:57.968 "io_size": 4096, 00:18:57.968 "runtime": 10.018337, 00:18:57.968 "iops": 3537.413444965966, 00:18:57.968 "mibps": 13.818021269398304, 00:18:57.968 "io_failed": 0, 00:18:57.968 "io_timeout": 0, 00:18:57.968 "avg_latency_us": 36128.57705298515, 00:18:57.968 "min_latency_us": 6407.964444444445, 00:18:57.968 "max_latency_us": 35340.89481481481 00:18:57.968 } 00:18:57.968 ], 00:18:57.968 "core_count": 1 00:18:57.968 } 00:18:57.968 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:18:57.968 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3349726 00:18:57.968 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3349726 ']' 00:18:57.968 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3349726 00:18:57.968 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:57.968 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:57.968 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3349726 00:18:57.968 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:57.968 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:57.968 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3349726' 00:18:57.968 killing process with pid 3349726 00:18:57.968 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3349726 00:18:57.968 Received shutdown signal, test time was about 10.000000 seconds 00:18:57.968 00:18:57.968 Latency(us) 00:18:57.968 [2024-11-20T08:09:34.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.968 [2024-11-20T08:09:34.997Z] =================================================================================================================== 00:18:57.968 [2024-11-20T08:09:34.998Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:57.969 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3349726 00:18:57.969 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.PrAtAO5Ubf 00:18:57.969 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PrAtAO5Ubf 00:18:57.969 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:57.969 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PrAtAO5Ubf 00:18:57.969 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:57.969 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:57.969 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:57.969 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:57.969 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PrAtAO5Ubf 00:18:57.969 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:57.969 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:57.969 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:57.969 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.PrAtAO5Ubf 00:18:57.969 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:57.969 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3350938 00:18:57.969 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:57.969 
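As a sanity check on the bdevperf summary printed above, the reported MiB/s follows directly from the IOPS and the 4096-byte I/O size in the results JSON. A quick recomputation (values copied from the JSON above, not newly measured):

```shell
# Recompute throughput from the figures in the results JSON above.
# With "io_size": 4096, MiB/s = IOPS * 4096 / 2^20 = IOPS / 256.
iops=3537.413444965966
io_size=4096
awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f MiB/s\n", i * s / 1048576 }'
# prints: 13.82 MiB/s, matching the reported "mibps": 13.818021269398304
```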
09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:57.969 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3350938 /var/tmp/bdevperf.sock 00:18:57.969 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3350938 ']' 00:18:57.969 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:57.969 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:57.969 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:57.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:57.969 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:57.969 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.969 [2024-11-20 09:09:34.963967] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:18:57.969 [2024-11-20 09:09:34.964052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3350938 ] 00:18:58.226 [2024-11-20 09:09:35.036363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.226 [2024-11-20 09:09:35.096488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:58.226 09:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:58.226 09:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:58.226 09:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PrAtAO5Ubf 00:18:58.565 [2024-11-20 09:09:35.460433] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.PrAtAO5Ubf': 0100666 00:18:58.565 [2024-11-20 09:09:35.460477] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:58.565 request: 00:18:58.565 { 00:18:58.565 "name": "key0", 00:18:58.565 "path": "/tmp/tmp.PrAtAO5Ubf", 00:18:58.565 "method": "keyring_file_add_key", 00:18:58.565 "req_id": 1 00:18:58.565 } 00:18:58.565 Got JSON-RPC error response 00:18:58.565 response: 00:18:58.565 { 00:18:58.565 "code": -1, 00:18:58.565 "message": "Operation not permitted" 00:18:58.565 } 00:18:58.565 09:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:58.822 [2024-11-20 09:09:35.725231] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:58.822 [2024-11-20 09:09:35.725289] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:58.822 request: 00:18:58.822 { 00:18:58.822 "name": "TLSTEST", 00:18:58.822 "trtype": "tcp", 00:18:58.822 "traddr": "10.0.0.2", 00:18:58.822 "adrfam": "ipv4", 00:18:58.822 "trsvcid": "4420", 00:18:58.822 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.822 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:58.822 "prchk_reftag": false, 00:18:58.822 "prchk_guard": false, 00:18:58.822 "hdgst": false, 00:18:58.822 "ddgst": false, 00:18:58.822 "psk": "key0", 00:18:58.822 "allow_unrecognized_csi": false, 00:18:58.822 "method": "bdev_nvme_attach_controller", 00:18:58.822 "req_id": 1 00:18:58.822 } 00:18:58.822 Got JSON-RPC error response 00:18:58.822 response: 00:18:58.822 { 00:18:58.822 "code": -126, 00:18:58.822 "message": "Required key not available" 00:18:58.822 } 00:18:58.822 09:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3350938 00:18:58.822 09:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3350938 ']' 00:18:58.822 09:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3350938 00:18:58.822 09:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:58.822 09:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:58.822 09:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3350938 00:18:58.822 09:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:58.822 09:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:58.822 09:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 3350938' 00:18:58.822 killing process with pid 3350938 00:18:58.822 09:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3350938 00:18:58.822 Received shutdown signal, test time was about 10.000000 seconds 00:18:58.822 00:18:58.822 Latency(us) 00:18:58.822 [2024-11-20T08:09:35.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.822 [2024-11-20T08:09:35.851Z] =================================================================================================================== 00:18:58.822 [2024-11-20T08:09:35.851Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:58.822 09:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3350938 00:18:59.079 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:59.079 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:59.079 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:59.079 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:59.079 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:59.079 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3349436 00:18:59.079 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3349436 ']' 00:18:59.079 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3349436 00:18:59.079 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:59.079 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:59.079 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3349436 00:18:59.079 
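The keyring_file_add_key failure above ("Invalid permissions for key file '/tmp/tmp.PrAtAO5Ubf': 0100666") is SPDK refusing a PSK file that is readable by group or other, which is exactly what the deliberate `chmod 0666` provoked. A minimal standalone sketch of that rule (plain shell, not SPDK code; assumes GNU `stat -c`):

```shell
# Sketch of the key-file permission rule the SPDK keyring enforces:
# any group/other permission bits (mask 077) cause the key to be rejected.
key=$(mktemp)

chmod 0666 "$key"
mode=$(stat -c '%a' "$key")           # e.g. "666"
if [ $(( 0$mode & 077 )) -ne 0 ]; then
  echo "rejected: invalid permissions 0$mode"
fi

chmod 0600 "$key"                     # what tls.sh later restores with chmod 0600
mode=$(stat -c '%a' "$key")
if [ $(( 0$mode & 077 )) -eq 0 ]; then
  echo "accepted: 0$mode"
fi
rm -f "$key"
```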
09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:59.079 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:59.079 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3349436' 00:18:59.079 killing process with pid 3349436 00:18:59.079 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3349436 00:18:59.079 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3349436 00:18:59.338 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:59.338 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:59.338 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:59.338 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.338 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3351196 00:18:59.338 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:59.338 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3351196 00:18:59.338 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3351196 ']' 00:18:59.338 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.338 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:59.338 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:18:59.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.338 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:59.338 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.338 [2024-11-20 09:09:36.333392] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:18:59.338 [2024-11-20 09:09:36.333497] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.596 [2024-11-20 09:09:36.404039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.596 [2024-11-20 09:09:36.461341] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:59.596 [2024-11-20 09:09:36.461396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:59.596 [2024-11-20 09:09:36.461426] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:59.596 [2024-11-20 09:09:36.461438] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:59.596 [2024-11-20 09:09:36.461448] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
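For orientation, the setup_nvmf_tgt flow exercised by this log (tls.sh lines 50-59) amounts to the RPC sequence below. This is a condensed sketch with the rpc.py path shortened for readability, not a verbatim excerpt, and it requires a running nvmf_tgt to execute:

```shell
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.PrAtAO5Ubf   # fails while the key file is 0666
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
```

The `-k` flag on nvmf_subsystem_add_listener enables the (experimental) TLS listener, which is why the subsequent keyring and add_host steps carry the PSK.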
00:18:59.596 [2024-11-20 09:09:36.462043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:59.597 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:59.597 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:59.597 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:59.597 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:59.597 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.597 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:59.597 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.PrAtAO5Ubf 00:18:59.597 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:59.597 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.PrAtAO5Ubf 00:18:59.597 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:18:59.597 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:59.597 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:18:59.597 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:59.597 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.PrAtAO5Ubf 00:18:59.597 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.PrAtAO5Ubf 00:18:59.597 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:59.856 [2024-11-20 09:09:36.844191] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:59.856 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:00.148 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:00.420 [2024-11-20 09:09:37.385653] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:00.420 [2024-11-20 09:09:37.385903] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:00.420 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:00.676 malloc0 00:19:00.676 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:00.934 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.PrAtAO5Ubf 00:19:01.191 [2024-11-20 09:09:38.182929] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.PrAtAO5Ubf': 0100666 00:19:01.191 [2024-11-20 09:09:38.182964] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:01.191 request: 00:19:01.191 { 00:19:01.191 "name": "key0", 00:19:01.191 "path": "/tmp/tmp.PrAtAO5Ubf", 00:19:01.191 "method": "keyring_file_add_key", 00:19:01.191 "req_id": 1 
00:19:01.191 } 00:19:01.191 Got JSON-RPC error response 00:19:01.191 response: 00:19:01.191 { 00:19:01.191 "code": -1, 00:19:01.191 "message": "Operation not permitted" 00:19:01.191 } 00:19:01.191 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:01.449 [2024-11-20 09:09:38.447688] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:01.449 [2024-11-20 09:09:38.447756] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:01.449 request: 00:19:01.449 { 00:19:01.449 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.449 "host": "nqn.2016-06.io.spdk:host1", 00:19:01.449 "psk": "key0", 00:19:01.449 "method": "nvmf_subsystem_add_host", 00:19:01.449 "req_id": 1 00:19:01.449 } 00:19:01.449 Got JSON-RPC error response 00:19:01.449 response: 00:19:01.449 { 00:19:01.449 "code": -32603, 00:19:01.449 "message": "Internal error" 00:19:01.449 } 00:19:01.449 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:01.449 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:01.449 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:01.449 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:01.449 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3351196 00:19:01.449 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3351196 ']' 00:19:01.449 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3351196 00:19:01.449 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:01.449 09:09:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:01.449 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3351196 00:19:01.707 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:01.707 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:01.707 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3351196' 00:19:01.707 killing process with pid 3351196 00:19:01.707 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3351196 00:19:01.707 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3351196 00:19:01.965 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.PrAtAO5Ubf 00:19:01.965 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:01.965 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:01.965 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:01.966 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.966 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3351523 00:19:01.966 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:01.966 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3351523 00:19:01.966 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3351523 ']' 00:19:01.966 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.966 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:01.966 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.966 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:01.966 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.966 [2024-11-20 09:09:38.791053] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:19:01.966 [2024-11-20 09:09:38.791134] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.966 [2024-11-20 09:09:38.858346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.966 [2024-11-20 09:09:38.910814] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.966 [2024-11-20 09:09:38.910870] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:01.966 [2024-11-20 09:09:38.910897] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:01.966 [2024-11-20 09:09:38.910908] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:01.966 [2024-11-20 09:09:38.910917] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:01.966 [2024-11-20 09:09:38.911473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:02.224 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:02.224 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:02.224 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:02.224 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:02.224 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.224 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:02.224 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.PrAtAO5Ubf 00:19:02.224 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.PrAtAO5Ubf 00:19:02.224 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:02.481 [2024-11-20 09:09:39.283025] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:02.481 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:02.739 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:02.997 [2024-11-20 09:09:39.840566] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:02.997 [2024-11-20 09:09:39.840835] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:02.997 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:03.255 malloc0 00:19:03.255 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:03.513 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.PrAtAO5Ubf 00:19:03.770 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:04.028 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3351806 00:19:04.028 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:04.028 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:04.028 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3351806 /var/tmp/bdevperf.sock 00:19:04.028 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3351806 ']' 00:19:04.028 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:04.028 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:04.028 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:19:04.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:04.028 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:04.028 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.028 [2024-11-20 09:09:40.998476] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:19:04.028 [2024-11-20 09:09:40.998561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3351806 ] 00:19:04.286 [2024-11-20 09:09:41.066346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.286 [2024-11-20 09:09:41.123474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:04.286 09:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:04.286 09:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:04.286 09:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PrAtAO5Ubf 00:19:04.544 09:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:04.803 [2024-11-20 09:09:41.757653] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:05.061 TLSTESTn1 00:19:05.061 09:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:05.320 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:05.320 "subsystems": [ 00:19:05.320 { 00:19:05.320 "subsystem": "keyring", 00:19:05.320 "config": [ 00:19:05.320 { 00:19:05.320 "method": "keyring_file_add_key", 00:19:05.320 "params": { 00:19:05.320 "name": "key0", 00:19:05.320 "path": "/tmp/tmp.PrAtAO5Ubf" 00:19:05.320 } 00:19:05.320 } 00:19:05.320 ] 00:19:05.320 }, 00:19:05.320 { 00:19:05.320 "subsystem": "iobuf", 00:19:05.320 "config": [ 00:19:05.320 { 00:19:05.320 "method": "iobuf_set_options", 00:19:05.320 "params": { 00:19:05.320 "small_pool_count": 8192, 00:19:05.320 "large_pool_count": 1024, 00:19:05.320 "small_bufsize": 8192, 00:19:05.320 "large_bufsize": 135168, 00:19:05.320 "enable_numa": false 00:19:05.320 } 00:19:05.320 } 00:19:05.320 ] 00:19:05.320 }, 00:19:05.320 { 00:19:05.320 "subsystem": "sock", 00:19:05.320 "config": [ 00:19:05.320 { 00:19:05.320 "method": "sock_set_default_impl", 00:19:05.320 "params": { 00:19:05.320 "impl_name": "posix" 00:19:05.320 } 00:19:05.320 }, 00:19:05.320 { 00:19:05.320 "method": "sock_impl_set_options", 00:19:05.320 "params": { 00:19:05.320 "impl_name": "ssl", 00:19:05.320 "recv_buf_size": 4096, 00:19:05.320 "send_buf_size": 4096, 00:19:05.320 "enable_recv_pipe": true, 00:19:05.320 "enable_quickack": false, 00:19:05.320 "enable_placement_id": 0, 00:19:05.320 "enable_zerocopy_send_server": true, 00:19:05.320 "enable_zerocopy_send_client": false, 00:19:05.320 "zerocopy_threshold": 0, 00:19:05.320 "tls_version": 0, 00:19:05.320 "enable_ktls": false 00:19:05.320 } 00:19:05.320 }, 00:19:05.320 { 00:19:05.320 "method": "sock_impl_set_options", 00:19:05.320 "params": { 00:19:05.320 "impl_name": "posix", 00:19:05.320 "recv_buf_size": 2097152, 00:19:05.320 "send_buf_size": 2097152, 00:19:05.320 "enable_recv_pipe": true, 00:19:05.320 "enable_quickack": false, 00:19:05.320 "enable_placement_id": 0, 
00:19:05.320 "enable_zerocopy_send_server": true, 00:19:05.320 "enable_zerocopy_send_client": false, 00:19:05.320 "zerocopy_threshold": 0, 00:19:05.320 "tls_version": 0, 00:19:05.320 "enable_ktls": false 00:19:05.320 } 00:19:05.320 } 00:19:05.320 ] 00:19:05.320 }, 00:19:05.320 { 00:19:05.320 "subsystem": "vmd", 00:19:05.320 "config": [] 00:19:05.320 }, 00:19:05.320 { 00:19:05.320 "subsystem": "accel", 00:19:05.320 "config": [ 00:19:05.320 { 00:19:05.320 "method": "accel_set_options", 00:19:05.320 "params": { 00:19:05.320 "small_cache_size": 128, 00:19:05.320 "large_cache_size": 16, 00:19:05.320 "task_count": 2048, 00:19:05.320 "sequence_count": 2048, 00:19:05.320 "buf_count": 2048 00:19:05.320 } 00:19:05.320 } 00:19:05.320 ] 00:19:05.320 }, 00:19:05.320 { 00:19:05.320 "subsystem": "bdev", 00:19:05.320 "config": [ 00:19:05.320 { 00:19:05.320 "method": "bdev_set_options", 00:19:05.320 "params": { 00:19:05.320 "bdev_io_pool_size": 65535, 00:19:05.320 "bdev_io_cache_size": 256, 00:19:05.320 "bdev_auto_examine": true, 00:19:05.320 "iobuf_small_cache_size": 128, 00:19:05.320 "iobuf_large_cache_size": 16 00:19:05.320 } 00:19:05.320 }, 00:19:05.320 { 00:19:05.320 "method": "bdev_raid_set_options", 00:19:05.320 "params": { 00:19:05.320 "process_window_size_kb": 1024, 00:19:05.320 "process_max_bandwidth_mb_sec": 0 00:19:05.320 } 00:19:05.320 }, 00:19:05.320 { 00:19:05.320 "method": "bdev_iscsi_set_options", 00:19:05.320 "params": { 00:19:05.320 "timeout_sec": 30 00:19:05.320 } 00:19:05.320 }, 00:19:05.320 { 00:19:05.320 "method": "bdev_nvme_set_options", 00:19:05.320 "params": { 00:19:05.320 "action_on_timeout": "none", 00:19:05.320 "timeout_us": 0, 00:19:05.320 "timeout_admin_us": 0, 00:19:05.320 "keep_alive_timeout_ms": 10000, 00:19:05.320 "arbitration_burst": 0, 00:19:05.320 "low_priority_weight": 0, 00:19:05.320 "medium_priority_weight": 0, 00:19:05.320 "high_priority_weight": 0, 00:19:05.320 "nvme_adminq_poll_period_us": 10000, 00:19:05.320 "nvme_ioq_poll_period_us": 0, 
00:19:05.320 "io_queue_requests": 0, 00:19:05.320 "delay_cmd_submit": true, 00:19:05.320 "transport_retry_count": 4, 00:19:05.320 "bdev_retry_count": 3, 00:19:05.320 "transport_ack_timeout": 0, 00:19:05.320 "ctrlr_loss_timeout_sec": 0, 00:19:05.320 "reconnect_delay_sec": 0, 00:19:05.320 "fast_io_fail_timeout_sec": 0, 00:19:05.320 "disable_auto_failback": false, 00:19:05.320 "generate_uuids": false, 00:19:05.320 "transport_tos": 0, 00:19:05.320 "nvme_error_stat": false, 00:19:05.320 "rdma_srq_size": 0, 00:19:05.320 "io_path_stat": false, 00:19:05.320 "allow_accel_sequence": false, 00:19:05.320 "rdma_max_cq_size": 0, 00:19:05.320 "rdma_cm_event_timeout_ms": 0, 00:19:05.320 "dhchap_digests": [ 00:19:05.320 "sha256", 00:19:05.320 "sha384", 00:19:05.320 "sha512" 00:19:05.320 ], 00:19:05.320 "dhchap_dhgroups": [ 00:19:05.320 "null", 00:19:05.320 "ffdhe2048", 00:19:05.320 "ffdhe3072", 00:19:05.320 "ffdhe4096", 00:19:05.320 "ffdhe6144", 00:19:05.320 "ffdhe8192" 00:19:05.320 ] 00:19:05.320 } 00:19:05.320 }, 00:19:05.320 { 00:19:05.320 "method": "bdev_nvme_set_hotplug", 00:19:05.320 "params": { 00:19:05.320 "period_us": 100000, 00:19:05.320 "enable": false 00:19:05.320 } 00:19:05.320 }, 00:19:05.320 { 00:19:05.320 "method": "bdev_malloc_create", 00:19:05.320 "params": { 00:19:05.320 "name": "malloc0", 00:19:05.321 "num_blocks": 8192, 00:19:05.321 "block_size": 4096, 00:19:05.321 "physical_block_size": 4096, 00:19:05.321 "uuid": "8c78c45b-a56b-4896-83e4-300fd93594dc", 00:19:05.321 "optimal_io_boundary": 0, 00:19:05.321 "md_size": 0, 00:19:05.321 "dif_type": 0, 00:19:05.321 "dif_is_head_of_md": false, 00:19:05.321 "dif_pi_format": 0 00:19:05.321 } 00:19:05.321 }, 00:19:05.321 { 00:19:05.321 "method": "bdev_wait_for_examine" 00:19:05.321 } 00:19:05.321 ] 00:19:05.321 }, 00:19:05.321 { 00:19:05.321 "subsystem": "nbd", 00:19:05.321 "config": [] 00:19:05.321 }, 00:19:05.321 { 00:19:05.321 "subsystem": "scheduler", 00:19:05.321 "config": [ 00:19:05.321 { 00:19:05.321 "method": 
"framework_set_scheduler", 00:19:05.321 "params": { 00:19:05.321 "name": "static" 00:19:05.321 } 00:19:05.321 } 00:19:05.321 ] 00:19:05.321 }, 00:19:05.321 { 00:19:05.321 "subsystem": "nvmf", 00:19:05.321 "config": [ 00:19:05.321 { 00:19:05.321 "method": "nvmf_set_config", 00:19:05.321 "params": { 00:19:05.321 "discovery_filter": "match_any", 00:19:05.321 "admin_cmd_passthru": { 00:19:05.321 "identify_ctrlr": false 00:19:05.321 }, 00:19:05.321 "dhchap_digests": [ 00:19:05.321 "sha256", 00:19:05.321 "sha384", 00:19:05.321 "sha512" 00:19:05.321 ], 00:19:05.321 "dhchap_dhgroups": [ 00:19:05.321 "null", 00:19:05.321 "ffdhe2048", 00:19:05.321 "ffdhe3072", 00:19:05.321 "ffdhe4096", 00:19:05.321 "ffdhe6144", 00:19:05.321 "ffdhe8192" 00:19:05.321 ] 00:19:05.321 } 00:19:05.321 }, 00:19:05.321 { 00:19:05.321 "method": "nvmf_set_max_subsystems", 00:19:05.321 "params": { 00:19:05.321 "max_subsystems": 1024 00:19:05.321 } 00:19:05.321 }, 00:19:05.321 { 00:19:05.321 "method": "nvmf_set_crdt", 00:19:05.321 "params": { 00:19:05.321 "crdt1": 0, 00:19:05.321 "crdt2": 0, 00:19:05.321 "crdt3": 0 00:19:05.321 } 00:19:05.321 }, 00:19:05.321 { 00:19:05.321 "method": "nvmf_create_transport", 00:19:05.321 "params": { 00:19:05.321 "trtype": "TCP", 00:19:05.321 "max_queue_depth": 128, 00:19:05.321 "max_io_qpairs_per_ctrlr": 127, 00:19:05.321 "in_capsule_data_size": 4096, 00:19:05.321 "max_io_size": 131072, 00:19:05.321 "io_unit_size": 131072, 00:19:05.321 "max_aq_depth": 128, 00:19:05.321 "num_shared_buffers": 511, 00:19:05.321 "buf_cache_size": 4294967295, 00:19:05.321 "dif_insert_or_strip": false, 00:19:05.321 "zcopy": false, 00:19:05.321 "c2h_success": false, 00:19:05.321 "sock_priority": 0, 00:19:05.321 "abort_timeout_sec": 1, 00:19:05.321 "ack_timeout": 0, 00:19:05.321 "data_wr_pool_size": 0 00:19:05.321 } 00:19:05.321 }, 00:19:05.321 { 00:19:05.321 "method": "nvmf_create_subsystem", 00:19:05.321 "params": { 00:19:05.321 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.321 
"allow_any_host": false, 00:19:05.321 "serial_number": "SPDK00000000000001", 00:19:05.321 "model_number": "SPDK bdev Controller", 00:19:05.321 "max_namespaces": 10, 00:19:05.321 "min_cntlid": 1, 00:19:05.321 "max_cntlid": 65519, 00:19:05.321 "ana_reporting": false 00:19:05.321 } 00:19:05.321 }, 00:19:05.321 { 00:19:05.321 "method": "nvmf_subsystem_add_host", 00:19:05.321 "params": { 00:19:05.321 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.321 "host": "nqn.2016-06.io.spdk:host1", 00:19:05.321 "psk": "key0" 00:19:05.321 } 00:19:05.321 }, 00:19:05.321 { 00:19:05.321 "method": "nvmf_subsystem_add_ns", 00:19:05.321 "params": { 00:19:05.321 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.321 "namespace": { 00:19:05.321 "nsid": 1, 00:19:05.321 "bdev_name": "malloc0", 00:19:05.321 "nguid": "8C78C45BA56B489683E4300FD93594DC", 00:19:05.321 "uuid": "8c78c45b-a56b-4896-83e4-300fd93594dc", 00:19:05.321 "no_auto_visible": false 00:19:05.321 } 00:19:05.321 } 00:19:05.321 }, 00:19:05.321 { 00:19:05.321 "method": "nvmf_subsystem_add_listener", 00:19:05.321 "params": { 00:19:05.321 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.321 "listen_address": { 00:19:05.321 "trtype": "TCP", 00:19:05.321 "adrfam": "IPv4", 00:19:05.321 "traddr": "10.0.0.2", 00:19:05.321 "trsvcid": "4420" 00:19:05.321 }, 00:19:05.321 "secure_channel": true 00:19:05.321 } 00:19:05.321 } 00:19:05.321 ] 00:19:05.321 } 00:19:05.321 ] 00:19:05.321 }' 00:19:05.321 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:05.580 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:05.580 "subsystems": [ 00:19:05.580 { 00:19:05.580 "subsystem": "keyring", 00:19:05.580 "config": [ 00:19:05.580 { 00:19:05.580 "method": "keyring_file_add_key", 00:19:05.580 "params": { 00:19:05.580 "name": "key0", 00:19:05.580 "path": "/tmp/tmp.PrAtAO5Ubf" 00:19:05.580 } 
00:19:05.580 } 00:19:05.580 ] 00:19:05.580 }, 00:19:05.580 { 00:19:05.580 "subsystem": "iobuf", 00:19:05.580 "config": [ 00:19:05.580 { 00:19:05.580 "method": "iobuf_set_options", 00:19:05.580 "params": { 00:19:05.580 "small_pool_count": 8192, 00:19:05.580 "large_pool_count": 1024, 00:19:05.580 "small_bufsize": 8192, 00:19:05.580 "large_bufsize": 135168, 00:19:05.580 "enable_numa": false 00:19:05.580 } 00:19:05.580 } 00:19:05.580 ] 00:19:05.580 }, 00:19:05.580 { 00:19:05.580 "subsystem": "sock", 00:19:05.580 "config": [ 00:19:05.580 { 00:19:05.580 "method": "sock_set_default_impl", 00:19:05.580 "params": { 00:19:05.580 "impl_name": "posix" 00:19:05.580 } 00:19:05.580 }, 00:19:05.580 { 00:19:05.580 "method": "sock_impl_set_options", 00:19:05.580 "params": { 00:19:05.580 "impl_name": "ssl", 00:19:05.580 "recv_buf_size": 4096, 00:19:05.580 "send_buf_size": 4096, 00:19:05.580 "enable_recv_pipe": true, 00:19:05.580 "enable_quickack": false, 00:19:05.580 "enable_placement_id": 0, 00:19:05.580 "enable_zerocopy_send_server": true, 00:19:05.580 "enable_zerocopy_send_client": false, 00:19:05.580 "zerocopy_threshold": 0, 00:19:05.580 "tls_version": 0, 00:19:05.580 "enable_ktls": false 00:19:05.580 } 00:19:05.580 }, 00:19:05.580 { 00:19:05.580 "method": "sock_impl_set_options", 00:19:05.580 "params": { 00:19:05.580 "impl_name": "posix", 00:19:05.580 "recv_buf_size": 2097152, 00:19:05.580 "send_buf_size": 2097152, 00:19:05.580 "enable_recv_pipe": true, 00:19:05.580 "enable_quickack": false, 00:19:05.580 "enable_placement_id": 0, 00:19:05.580 "enable_zerocopy_send_server": true, 00:19:05.580 "enable_zerocopy_send_client": false, 00:19:05.580 "zerocopy_threshold": 0, 00:19:05.580 "tls_version": 0, 00:19:05.580 "enable_ktls": false 00:19:05.580 } 00:19:05.580 } 00:19:05.580 ] 00:19:05.580 }, 00:19:05.580 { 00:19:05.580 "subsystem": "vmd", 00:19:05.580 "config": [] 00:19:05.580 }, 00:19:05.580 { 00:19:05.580 "subsystem": "accel", 00:19:05.580 "config": [ 00:19:05.580 { 00:19:05.580 
"method": "accel_set_options", 00:19:05.580 "params": { 00:19:05.580 "small_cache_size": 128, 00:19:05.580 "large_cache_size": 16, 00:19:05.580 "task_count": 2048, 00:19:05.580 "sequence_count": 2048, 00:19:05.580 "buf_count": 2048 00:19:05.580 } 00:19:05.580 } 00:19:05.580 ] 00:19:05.580 }, 00:19:05.580 { 00:19:05.580 "subsystem": "bdev", 00:19:05.580 "config": [ 00:19:05.580 { 00:19:05.580 "method": "bdev_set_options", 00:19:05.580 "params": { 00:19:05.580 "bdev_io_pool_size": 65535, 00:19:05.580 "bdev_io_cache_size": 256, 00:19:05.580 "bdev_auto_examine": true, 00:19:05.580 "iobuf_small_cache_size": 128, 00:19:05.580 "iobuf_large_cache_size": 16 00:19:05.580 } 00:19:05.580 }, 00:19:05.580 { 00:19:05.580 "method": "bdev_raid_set_options", 00:19:05.580 "params": { 00:19:05.580 "process_window_size_kb": 1024, 00:19:05.580 "process_max_bandwidth_mb_sec": 0 00:19:05.580 } 00:19:05.580 }, 00:19:05.580 { 00:19:05.580 "method": "bdev_iscsi_set_options", 00:19:05.580 "params": { 00:19:05.580 "timeout_sec": 30 00:19:05.580 } 00:19:05.581 }, 00:19:05.581 { 00:19:05.581 "method": "bdev_nvme_set_options", 00:19:05.581 "params": { 00:19:05.581 "action_on_timeout": "none", 00:19:05.581 "timeout_us": 0, 00:19:05.581 "timeout_admin_us": 0, 00:19:05.581 "keep_alive_timeout_ms": 10000, 00:19:05.581 "arbitration_burst": 0, 00:19:05.581 "low_priority_weight": 0, 00:19:05.581 "medium_priority_weight": 0, 00:19:05.581 "high_priority_weight": 0, 00:19:05.581 "nvme_adminq_poll_period_us": 10000, 00:19:05.581 "nvme_ioq_poll_period_us": 0, 00:19:05.581 "io_queue_requests": 512, 00:19:05.581 "delay_cmd_submit": true, 00:19:05.581 "transport_retry_count": 4, 00:19:05.581 "bdev_retry_count": 3, 00:19:05.581 "transport_ack_timeout": 0, 00:19:05.581 "ctrlr_loss_timeout_sec": 0, 00:19:05.581 "reconnect_delay_sec": 0, 00:19:05.581 "fast_io_fail_timeout_sec": 0, 00:19:05.581 "disable_auto_failback": false, 00:19:05.581 "generate_uuids": false, 00:19:05.581 "transport_tos": 0, 00:19:05.581 
"nvme_error_stat": false, 00:19:05.581 "rdma_srq_size": 0, 00:19:05.581 "io_path_stat": false, 00:19:05.581 "allow_accel_sequence": false, 00:19:05.581 "rdma_max_cq_size": 0, 00:19:05.581 "rdma_cm_event_timeout_ms": 0, 00:19:05.581 "dhchap_digests": [ 00:19:05.581 "sha256", 00:19:05.581 "sha384", 00:19:05.581 "sha512" 00:19:05.581 ], 00:19:05.581 "dhchap_dhgroups": [ 00:19:05.581 "null", 00:19:05.581 "ffdhe2048", 00:19:05.581 "ffdhe3072", 00:19:05.581 "ffdhe4096", 00:19:05.581 "ffdhe6144", 00:19:05.581 "ffdhe8192" 00:19:05.581 ] 00:19:05.581 } 00:19:05.581 }, 00:19:05.581 { 00:19:05.581 "method": "bdev_nvme_attach_controller", 00:19:05.581 "params": { 00:19:05.581 "name": "TLSTEST", 00:19:05.581 "trtype": "TCP", 00:19:05.581 "adrfam": "IPv4", 00:19:05.581 "traddr": "10.0.0.2", 00:19:05.581 "trsvcid": "4420", 00:19:05.581 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.581 "prchk_reftag": false, 00:19:05.581 "prchk_guard": false, 00:19:05.581 "ctrlr_loss_timeout_sec": 0, 00:19:05.581 "reconnect_delay_sec": 0, 00:19:05.581 "fast_io_fail_timeout_sec": 0, 00:19:05.581 "psk": "key0", 00:19:05.581 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:05.581 "hdgst": false, 00:19:05.581 "ddgst": false, 00:19:05.581 "multipath": "multipath" 00:19:05.581 } 00:19:05.581 }, 00:19:05.581 { 00:19:05.581 "method": "bdev_nvme_set_hotplug", 00:19:05.581 "params": { 00:19:05.581 "period_us": 100000, 00:19:05.581 "enable": false 00:19:05.581 } 00:19:05.581 }, 00:19:05.581 { 00:19:05.581 "method": "bdev_wait_for_examine" 00:19:05.581 } 00:19:05.581 ] 00:19:05.581 }, 00:19:05.581 { 00:19:05.581 "subsystem": "nbd", 00:19:05.581 "config": [] 00:19:05.581 } 00:19:05.581 ] 00:19:05.581 }' 00:19:05.581 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3351806 00:19:05.581 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3351806 ']' 00:19:05.581 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- 
# kill -0 3351806 00:19:05.581 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:05.581 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:05.581 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3351806 00:19:05.581 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:05.581 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:05.581 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3351806' 00:19:05.581 killing process with pid 3351806 00:19:05.581 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3351806 00:19:05.581 Received shutdown signal, test time was about 10.000000 seconds 00:19:05.581 00:19:05.581 Latency(us) 00:19:05.581 [2024-11-20T08:09:42.610Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.581 [2024-11-20T08:09:42.610Z] =================================================================================================================== 00:19:05.581 [2024-11-20T08:09:42.610Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:05.581 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3351806 00:19:05.839 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3351523 00:19:05.839 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3351523 ']' 00:19:05.839 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3351523 00:19:05.839 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:05.839 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:05.839 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3351523 00:19:05.839 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:05.839 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:05.839 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3351523' 00:19:05.839 killing process with pid 3351523 00:19:05.839 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3351523 00:19:05.839 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3351523 00:19:06.098 09:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:06.098 09:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:06.098 09:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:06.098 09:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:06.098 "subsystems": [ 00:19:06.098 { 00:19:06.098 "subsystem": "keyring", 00:19:06.098 "config": [ 00:19:06.098 { 00:19:06.098 "method": "keyring_file_add_key", 00:19:06.098 "params": { 00:19:06.098 "name": "key0", 00:19:06.098 "path": "/tmp/tmp.PrAtAO5Ubf" 00:19:06.098 } 00:19:06.098 } 00:19:06.098 ] 00:19:06.098 }, 00:19:06.098 { 00:19:06.098 "subsystem": "iobuf", 00:19:06.098 "config": [ 00:19:06.098 { 00:19:06.098 "method": "iobuf_set_options", 00:19:06.098 "params": { 00:19:06.098 "small_pool_count": 8192, 00:19:06.098 "large_pool_count": 1024, 00:19:06.098 "small_bufsize": 8192, 00:19:06.098 "large_bufsize": 135168, 00:19:06.098 "enable_numa": false 00:19:06.098 } 00:19:06.098 } 00:19:06.098 ] 00:19:06.098 }, 
00:19:06.098 { 00:19:06.098 "subsystem": "sock", 00:19:06.098 "config": [ 00:19:06.098 { 00:19:06.098 "method": "sock_set_default_impl", 00:19:06.098 "params": { 00:19:06.098 "impl_name": "posix" 00:19:06.098 } 00:19:06.098 }, 00:19:06.098 { 00:19:06.098 "method": "sock_impl_set_options", 00:19:06.098 "params": { 00:19:06.098 "impl_name": "ssl", 00:19:06.098 "recv_buf_size": 4096, 00:19:06.098 "send_buf_size": 4096, 00:19:06.098 "enable_recv_pipe": true, 00:19:06.098 "enable_quickack": false, 00:19:06.098 "enable_placement_id": 0, 00:19:06.098 "enable_zerocopy_send_server": true, 00:19:06.098 "enable_zerocopy_send_client": false, 00:19:06.098 "zerocopy_threshold": 0, 00:19:06.098 "tls_version": 0, 00:19:06.098 "enable_ktls": false 00:19:06.098 } 00:19:06.098 }, 00:19:06.098 { 00:19:06.098 "method": "sock_impl_set_options", 00:19:06.098 "params": { 00:19:06.098 "impl_name": "posix", 00:19:06.098 "recv_buf_size": 2097152, 00:19:06.098 "send_buf_size": 2097152, 00:19:06.098 "enable_recv_pipe": true, 00:19:06.098 "enable_quickack": false, 00:19:06.098 "enable_placement_id": 0, 00:19:06.098 "enable_zerocopy_send_server": true, 00:19:06.098 "enable_zerocopy_send_client": false, 00:19:06.098 "zerocopy_threshold": 0, 00:19:06.098 "tls_version": 0, 00:19:06.098 "enable_ktls": false 00:19:06.098 } 00:19:06.098 } 00:19:06.098 ] 00:19:06.098 }, 00:19:06.098 { 00:19:06.098 "subsystem": "vmd", 00:19:06.098 "config": [] 00:19:06.098 }, 00:19:06.098 { 00:19:06.098 "subsystem": "accel", 00:19:06.098 "config": [ 00:19:06.098 { 00:19:06.098 "method": "accel_set_options", 00:19:06.098 "params": { 00:19:06.098 "small_cache_size": 128, 00:19:06.098 "large_cache_size": 16, 00:19:06.098 "task_count": 2048, 00:19:06.098 "sequence_count": 2048, 00:19:06.098 "buf_count": 2048 00:19:06.098 } 00:19:06.098 } 00:19:06.098 ] 00:19:06.098 }, 00:19:06.098 { 00:19:06.098 "subsystem": "bdev", 00:19:06.098 "config": [ 00:19:06.098 { 00:19:06.098 "method": "bdev_set_options", 00:19:06.098 "params": { 
00:19:06.098 "bdev_io_pool_size": 65535, 00:19:06.098 "bdev_io_cache_size": 256, 00:19:06.098 "bdev_auto_examine": true, 00:19:06.098 "iobuf_small_cache_size": 128, 00:19:06.098 "iobuf_large_cache_size": 16 00:19:06.098 } 00:19:06.098 }, 00:19:06.098 { 00:19:06.098 "method": "bdev_raid_set_options", 00:19:06.098 "params": { 00:19:06.098 "process_window_size_kb": 1024, 00:19:06.098 "process_max_bandwidth_mb_sec": 0 00:19:06.098 } 00:19:06.098 }, 00:19:06.098 { 00:19:06.098 "method": "bdev_iscsi_set_options", 00:19:06.098 "params": { 00:19:06.098 "timeout_sec": 30 00:19:06.098 } 00:19:06.098 }, 00:19:06.098 { 00:19:06.098 "method": "bdev_nvme_set_options", 00:19:06.098 "params": { 00:19:06.098 "action_on_timeout": "none", 00:19:06.098 "timeout_us": 0, 00:19:06.098 "timeout_admin_us": 0, 00:19:06.098 "keep_alive_timeout_ms": 10000, 00:19:06.098 "arbitration_burst": 0, 00:19:06.098 "low_priority_weight": 0, 00:19:06.098 "medium_priority_weight": 0, 00:19:06.098 "high_priority_weight": 0, 00:19:06.098 "nvme_adminq_poll_period_us": 10000, 00:19:06.098 "nvme_ioq_poll_period_us": 0, 00:19:06.098 "io_queue_requests": 0, 00:19:06.098 "delay_cmd_submit": true, 00:19:06.098 "transport_retry_count": 4, 00:19:06.098 "bdev_retry_count": 3, 00:19:06.098 "transport_ack_timeout": 0, 00:19:06.098 "ctrlr_loss_timeout_sec": 0, 00:19:06.098 "reconnect_delay_sec": 0, 00:19:06.098 "fast_io_fail_timeout_sec": 0, 00:19:06.098 "disable_auto_failback": false, 00:19:06.098 "generate_uuids": false, 00:19:06.098 "transport_tos": 0, 00:19:06.098 "nvme_error_stat": false, 00:19:06.098 "rdma_srq_size": 0, 00:19:06.098 "io_path_stat": false, 00:19:06.098 "allow_accel_sequence": false, 00:19:06.098 "rdma_max_cq_size": 0, 00:19:06.098 "rdma_cm_event_timeout_ms": 0, 00:19:06.098 "dhchap_digests": [ 00:19:06.098 "sha256", 00:19:06.098 "sha384", 00:19:06.098 "sha512" 00:19:06.098 ], 00:19:06.099 "dhchap_dhgroups": [ 00:19:06.099 "null", 00:19:06.099 "ffdhe2048", 00:19:06.099 "ffdhe3072", 00:19:06.099 
"ffdhe4096", 00:19:06.099 "ffdhe6144", 00:19:06.099 "ffdhe8192" 00:19:06.099 ] 00:19:06.099 } 00:19:06.099 }, 00:19:06.099 { 00:19:06.099 "method": "bdev_nvme_set_hotplug", 00:19:06.099 "params": { 00:19:06.099 "period_us": 100000, 00:19:06.099 "enable": false 00:19:06.099 } 00:19:06.099 }, 00:19:06.099 { 00:19:06.099 "method": "bdev_malloc_create", 00:19:06.099 "params": { 00:19:06.099 "name": "malloc0", 00:19:06.099 "num_blocks": 8192, 00:19:06.099 "block_size": 4096, 00:19:06.099 "physical_block_size": 4096, 00:19:06.099 "uuid": "8c78c45b-a56b-4896-83e4-300fd93594dc", 00:19:06.099 "optimal_io_boundary": 0, 00:19:06.099 "md_size": 0, 00:19:06.099 "dif_type": 0, 00:19:06.099 "dif_is_head_of_md": false, 00:19:06.099 "dif_pi_format": 0 00:19:06.099 } 00:19:06.099 }, 00:19:06.099 { 00:19:06.099 "method": "bdev_wait_for_examine" 00:19:06.099 } 00:19:06.099 ] 00:19:06.099 }, 00:19:06.099 { 00:19:06.099 "subsystem": "nbd", 00:19:06.099 "config": [] 00:19:06.099 }, 00:19:06.099 { 00:19:06.099 "subsystem": "scheduler", 00:19:06.099 "config": [ 00:19:06.099 { 00:19:06.099 "method": "framework_set_scheduler", 00:19:06.099 "params": { 00:19:06.099 "name": "static" 00:19:06.099 } 00:19:06.099 } 00:19:06.099 ] 00:19:06.099 }, 00:19:06.099 { 00:19:06.099 "subsystem": "nvmf", 00:19:06.099 "config": [ 00:19:06.099 { 00:19:06.099 "method": "nvmf_set_config", 00:19:06.099 "params": { 00:19:06.099 "discovery_filter": "match_any", 00:19:06.099 "admin_cmd_passthru": { 00:19:06.099 "identify_ctrlr": false 00:19:06.099 }, 00:19:06.099 "dhchap_digests": [ 00:19:06.099 "sha256", 00:19:06.099 "sha384", 00:19:06.099 "sha512" 00:19:06.099 ], 00:19:06.099 "dhchap_dhgroups": [ 00:19:06.099 "null", 00:19:06.099 "ffdhe2048", 00:19:06.099 "ffdhe3072", 00:19:06.099 "ffdhe4096", 00:19:06.099 "ffdhe6144", 00:19:06.099 "ffdhe8192" 00:19:06.099 ] 00:19:06.099 } 00:19:06.099 }, 00:19:06.099 { 00:19:06.099 "method": "nvmf_set_max_subsystems", 00:19:06.099 "params": { 00:19:06.099 "max_subsystems": 1024 
00:19:06.099 } 00:19:06.099 }, 00:19:06.099 { 00:19:06.099 "method": "nvmf_set_crdt", 00:19:06.099 "params": { 00:19:06.099 "crdt1": 0, 00:19:06.099 "crdt2": 0, 00:19:06.099 "crdt3": 0 00:19:06.099 } 00:19:06.099 }, 00:19:06.099 { 00:19:06.099 "method": "nvmf_create_transport", 00:19:06.099 "params": { 00:19:06.099 "trtype": "TCP", 00:19:06.099 "max_queue_depth": 128, 00:19:06.099 "max_io_qpairs_per_ctrlr": 127, 00:19:06.099 "in_capsule_data_size": 4096, 00:19:06.099 "max_io_size": 131072, 00:19:06.099 "io_unit_size": 131072, 00:19:06.099 "max_aq_depth": 128, 00:19:06.099 "num_shared_buffers": 511, 00:19:06.099 "buf_cache_size": 4294967295, 00:19:06.099 "dif_insert_or_strip": false, 00:19:06.099 "zcopy": false, 00:19:06.099 "c2h_success": false, 00:19:06.099 "sock_priority": 0, 00:19:06.099 "abort_timeout_sec": 1, 00:19:06.099 "ack_timeout": 0, 00:19:06.099 "data_wr_pool_size": 0 00:19:06.099 } 00:19:06.099 }, 00:19:06.099 { 00:19:06.099 "method": "nvmf_create_subsystem", 00:19:06.099 "params": { 00:19:06.099 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.099 "allow_any_host": false, 00:19:06.099 "serial_number": "SPDK00000000000001", 00:19:06.099 "model_number": "SPDK bdev Controller", 00:19:06.099 "max_namespaces": 10, 00:19:06.099 "min_cntlid": 1, 00:19:06.099 "max_cntlid": 65519, 00:19:06.099 "ana_reporting": false 00:19:06.099 } 00:19:06.099 }, 00:19:06.099 { 00:19:06.099 "method": "nvmf_subsystem_add_host", 00:19:06.099 "params": { 00:19:06.099 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.099 "host": "nqn.2016-06.io.spdk:host1", 00:19:06.099 "psk": "key0" 00:19:06.099 } 00:19:06.099 }, 00:19:06.099 { 00:19:06.099 "method": "nvmf_subsystem_add_ns", 00:19:06.099 "params": { 00:19:06.099 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.099 "namespace": { 00:19:06.099 "nsid": 1, 00:19:06.099 "bdev_name": "malloc0", 00:19:06.099 "nguid": "8C78C45BA56B489683E4300FD93594DC", 00:19:06.099 "uuid": "8c78c45b-a56b-4896-83e4-300fd93594dc", 00:19:06.099 "no_auto_visible": 
false 00:19:06.099 } 00:19:06.099 } 00:19:06.099 }, 00:19:06.099 { 00:19:06.099 "method": "nvmf_subsystem_add_listener", 00:19:06.099 "params": { 00:19:06.099 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.099 "listen_address": { 00:19:06.099 "trtype": "TCP", 00:19:06.099 "adrfam": "IPv4", 00:19:06.099 "traddr": "10.0.0.2", 00:19:06.099 "trsvcid": "4420" 00:19:06.099 }, 00:19:06.099 "secure_channel": true 00:19:06.099 } 00:19:06.099 } 00:19:06.099 ] 00:19:06.099 } 00:19:06.099 ] 00:19:06.099 }' 00:19:06.099 09:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.099 09:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3352085 00:19:06.099 09:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:06.099 09:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3352085 00:19:06.099 09:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3352085 ']' 00:19:06.099 09:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.099 09:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:06.099 09:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:06.099 09:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:06.099 09:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.357 [2024-11-20 09:09:43.128583] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:19:06.358 [2024-11-20 09:09:43.128692] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.358 [2024-11-20 09:09:43.198633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.358 [2024-11-20 09:09:43.254864] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:06.358 [2024-11-20 09:09:43.254929] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:06.358 [2024-11-20 09:09:43.254956] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:06.358 [2024-11-20 09:09:43.254968] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:06.358 [2024-11-20 09:09:43.254978] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:06.358 [2024-11-20 09:09:43.255694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.616 [2024-11-20 09:09:43.503792] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:06.616 [2024-11-20 09:09:43.535825] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:06.616 [2024-11-20 09:09:43.536072] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:07.182 09:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:07.182 09:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:07.182 09:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:07.182 09:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:07.182 09:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.182 09:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:07.182 09:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3352235 00:19:07.182 09:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3352235 /var/tmp/bdevperf.sock 00:19:07.182 09:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3352235 ']' 00:19:07.182 09:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:07.182 09:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:07.182 09:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:19:07.182 09:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:07.182 "subsystems": [ 00:19:07.182 { 00:19:07.182 "subsystem": "keyring", 00:19:07.182 "config": [ 00:19:07.182 { 00:19:07.182 "method": "keyring_file_add_key", 00:19:07.182 "params": { 00:19:07.182 "name": "key0", 00:19:07.182 "path": "/tmp/tmp.PrAtAO5Ubf" 00:19:07.182 } 00:19:07.182 } 00:19:07.182 ] 00:19:07.182 }, 00:19:07.182 { 00:19:07.182 "subsystem": "iobuf", 00:19:07.182 "config": [ 00:19:07.182 { 00:19:07.182 "method": "iobuf_set_options", 00:19:07.182 "params": { 00:19:07.182 "small_pool_count": 8192, 00:19:07.182 "large_pool_count": 1024, 00:19:07.182 "small_bufsize": 8192, 00:19:07.182 "large_bufsize": 135168, 00:19:07.182 "enable_numa": false 00:19:07.182 } 00:19:07.182 } 00:19:07.182 ] 00:19:07.182 }, 00:19:07.182 { 00:19:07.182 "subsystem": "sock", 00:19:07.182 "config": [ 00:19:07.182 { 00:19:07.182 "method": "sock_set_default_impl", 00:19:07.182 "params": { 00:19:07.182 "impl_name": "posix" 00:19:07.182 } 00:19:07.182 }, 00:19:07.182 { 00:19:07.182 "method": "sock_impl_set_options", 00:19:07.182 "params": { 00:19:07.182 "impl_name": "ssl", 00:19:07.182 "recv_buf_size": 4096, 00:19:07.182 "send_buf_size": 4096, 00:19:07.182 "enable_recv_pipe": true, 00:19:07.182 "enable_quickack": false, 00:19:07.182 "enable_placement_id": 0, 00:19:07.182 "enable_zerocopy_send_server": true, 00:19:07.182 "enable_zerocopy_send_client": false, 00:19:07.182 "zerocopy_threshold": 0, 00:19:07.182 "tls_version": 0, 00:19:07.182 "enable_ktls": false 00:19:07.182 } 00:19:07.182 }, 00:19:07.182 { 00:19:07.182 "method": "sock_impl_set_options", 00:19:07.182 "params": { 00:19:07.182 "impl_name": "posix", 00:19:07.182 "recv_buf_size": 2097152, 00:19:07.182 "send_buf_size": 2097152, 00:19:07.182 "enable_recv_pipe": true, 00:19:07.182 "enable_quickack": false, 00:19:07.182 "enable_placement_id": 0, 00:19:07.182 "enable_zerocopy_send_server": true, 00:19:07.182 
"enable_zerocopy_send_client": false, 00:19:07.183 "zerocopy_threshold": 0, 00:19:07.183 "tls_version": 0, 00:19:07.183 "enable_ktls": false 00:19:07.183 } 00:19:07.183 } 00:19:07.183 ] 00:19:07.183 }, 00:19:07.183 { 00:19:07.183 "subsystem": "vmd", 00:19:07.183 "config": [] 00:19:07.183 }, 00:19:07.183 { 00:19:07.183 "subsystem": "accel", 00:19:07.183 "config": [ 00:19:07.183 { 00:19:07.183 "method": "accel_set_options", 00:19:07.183 "params": { 00:19:07.183 "small_cache_size": 128, 00:19:07.183 "large_cache_size": 16, 00:19:07.183 "task_count": 2048, 00:19:07.183 "sequence_count": 2048, 00:19:07.183 "buf_count": 2048 00:19:07.183 } 00:19:07.183 } 00:19:07.183 ] 00:19:07.183 }, 00:19:07.183 { 00:19:07.183 "subsystem": "bdev", 00:19:07.183 "config": [ 00:19:07.183 { 00:19:07.183 "method": "bdev_set_options", 00:19:07.183 "params": { 00:19:07.183 "bdev_io_pool_size": 65535, 00:19:07.183 "bdev_io_cache_size": 256, 00:19:07.183 "bdev_auto_examine": true, 00:19:07.183 "iobuf_small_cache_size": 128, 00:19:07.183 "iobuf_large_cache_size": 16 00:19:07.183 } 00:19:07.183 }, 00:19:07.183 { 00:19:07.183 "method": "bdev_raid_set_options", 00:19:07.183 "params": { 00:19:07.183 "process_window_size_kb": 1024, 00:19:07.183 "process_max_bandwidth_mb_sec": 0 00:19:07.183 } 00:19:07.183 }, 00:19:07.183 { 00:19:07.183 "method": "bdev_iscsi_set_options", 00:19:07.183 "params": { 00:19:07.183 "timeout_sec": 30 00:19:07.183 } 00:19:07.183 }, 00:19:07.183 { 00:19:07.183 "method": "bdev_nvme_set_options", 00:19:07.183 "params": { 00:19:07.183 "action_on_timeout": "none", 00:19:07.183 "timeout_us": 0, 00:19:07.183 "timeout_admin_us": 0, 00:19:07.183 "keep_alive_timeout_ms": 10000, 00:19:07.183 "arbitration_burst": 0, 00:19:07.183 "low_priority_weight": 0, 00:19:07.183 "medium_priority_weight": 0, 00:19:07.183 "high_priority_weight": 0, 00:19:07.183 "nvme_adminq_poll_period_us": 10000, 00:19:07.183 "nvme_ioq_poll_period_us": 0, 00:19:07.183 "io_queue_requests": 512, 00:19:07.183 
"delay_cmd_submit": true, 00:19:07.183 "transport_retry_count": 4, 00:19:07.183 "bdev_retry_count": 3, 00:19:07.183 "transport_ack_timeout": 0, 00:19:07.183 "ctrlr_loss_timeout_sec": 0, 00:19:07.183 "reconnect_delay_sec": 0, 00:19:07.183 "fast_io_fail_timeout_sec": 0, 00:19:07.183 "disable_auto_failback": false, 00:19:07.183 "generate_uuids": false, 00:19:07.183 "transport_tos": 0, 00:19:07.183 "nvme_error_stat": false, 00:19:07.183 "rdma_srq_size": 0, 00:19:07.183 "io_path_stat": false, 00:19:07.183 "allow_accel_sequence": false, 00:19:07.183 "rdma_max_cq_size": 0, 00:19:07.183 "rdma_cm_event_timeout_ms": 0, 00:19:07.183 "dhchap_digests": [ 00:19:07.183 "sha256", 00:19:07.183 "sha384", 00:19:07.183 "sha512" 00:19:07.183 ], 00:19:07.183 "dhchap_dhgroups": [ 00:19:07.183 "null", 00:19:07.183 "ffdhe2048", 00:19:07.183 "ffdhe3072", 00:19:07.183 "ffdhe4096", 00:19:07.183 "ffdhe6144", 00:19:07.183 "ffdhe8192" 00:19:07.183 ] 00:19:07.183 } 00:19:07.183 }, 00:19:07.183 { 00:19:07.183 "method": "bdev_nvme_attach_controller", 00:19:07.183 "params": { 00:19:07.183 "name": "TLSTEST", 00:19:07.183 "trtype": "TCP", 00:19:07.183 "adrfam": "IPv4", 00:19:07.183 "traddr": "10.0.0.2", 00:19:07.183 "trsvcid": "4420", 00:19:07.183 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.183 "prchk_reftag": false, 00:19:07.183 "prchk_guard": false, 00:19:07.183 "ctrlr_loss_timeout_sec": 0, 00:19:07.183 "reconnect_delay_sec": 0, 00:19:07.183 "fast_io_fail_timeout_sec": 0, 00:19:07.183 "psk": "key0", 00:19:07.183 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:07.183 "hdgst": false, 00:19:07.183 "ddgst": false, 00:19:07.183 "multipath": "multipath" 00:19:07.183 } 00:19:07.183 }, 00:19:07.183 { 00:19:07.183 "method": "bdev_nvme_set_hotplug", 00:19:07.183 "params": { 00:19:07.183 "period_us": 100000, 00:19:07.183 "enable": false 00:19:07.183 } 00:19:07.183 }, 00:19:07.183 { 00:19:07.183 "method": "bdev_wait_for_examine" 00:19:07.183 } 00:19:07.183 ] 00:19:07.183 }, 00:19:07.183 { 00:19:07.183 
"subsystem": "nbd", 00:19:07.183 "config": [] 00:19:07.183 } 00:19:07.183 ] 00:19:07.183 }' 00:19:07.183 09:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:07.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:07.183 09:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:07.183 09:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.441 [2024-11-20 09:09:44.250001] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:19:07.441 [2024-11-20 09:09:44.250084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3352235 ] 00:19:07.441 [2024-11-20 09:09:44.315195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.441 [2024-11-20 09:09:44.372310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:07.699 [2024-11-20 09:09:44.554952] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:07.699 09:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:07.699 09:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:07.699 09:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:07.958 Running I/O for 10 seconds... 
00:19:09.828 3096.00 IOPS, 12.09 MiB/s [2024-11-20T08:09:48.230Z] 3157.00 IOPS, 12.33 MiB/s [2024-11-20T08:09:49.164Z] 3163.67 IOPS, 12.36 MiB/s [2024-11-20T08:09:50.097Z] 3184.75 IOPS, 12.44 MiB/s [2024-11-20T08:09:51.031Z] 3192.00 IOPS, 12.47 MiB/s [2024-11-20T08:09:51.964Z] 3181.50 IOPS, 12.43 MiB/s [2024-11-20T08:09:52.898Z] 3169.00 IOPS, 12.38 MiB/s [2024-11-20T08:09:53.832Z] 3167.62 IOPS, 12.37 MiB/s [2024-11-20T08:09:55.206Z] 3177.00 IOPS, 12.41 MiB/s [2024-11-20T08:09:55.206Z] 3179.90 IOPS, 12.42 MiB/s 00:19:18.177 Latency(us) 00:19:18.177 [2024-11-20T08:09:55.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.177 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:18.177 Verification LBA range: start 0x0 length 0x2000 00:19:18.177 TLSTESTn1 : 10.02 3186.24 12.45 0.00 0.00 40108.74 7524.50 49321.91 00:19:18.177 [2024-11-20T08:09:55.206Z] =================================================================================================================== 00:19:18.177 [2024-11-20T08:09:55.206Z] Total : 3186.24 12.45 0.00 0.00 40108.74 7524.50 49321.91 00:19:18.177 { 00:19:18.177 "results": [ 00:19:18.177 { 00:19:18.177 "job": "TLSTESTn1", 00:19:18.177 "core_mask": "0x4", 00:19:18.177 "workload": "verify", 00:19:18.177 "status": "finished", 00:19:18.177 "verify_range": { 00:19:18.177 "start": 0, 00:19:18.177 "length": 8192 00:19:18.177 }, 00:19:18.177 "queue_depth": 128, 00:19:18.177 "io_size": 4096, 00:19:18.177 "runtime": 10.01996, 00:19:18.177 "iops": 3186.2402644321933, 00:19:18.177 "mibps": 12.446251032938255, 00:19:18.177 "io_failed": 0, 00:19:18.177 "io_timeout": 0, 00:19:18.177 "avg_latency_us": 40108.73647235157, 00:19:18.177 "min_latency_us": 7524.503703703704, 00:19:18.177 "max_latency_us": 49321.90814814815 00:19:18.177 } 00:19:18.177 ], 00:19:18.177 "core_count": 1 00:19:18.177 } 00:19:18.177 09:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:19:18.177 09:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3352235 00:19:18.177 09:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3352235 ']' 00:19:18.177 09:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3352235 00:19:18.177 09:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:18.177 09:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:18.177 09:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3352235 00:19:18.177 09:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:18.177 09:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:18.177 09:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3352235' 00:19:18.177 killing process with pid 3352235 00:19:18.177 09:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3352235 00:19:18.177 Received shutdown signal, test time was about 10.000000 seconds 00:19:18.177 00:19:18.177 Latency(us) 00:19:18.177 [2024-11-20T08:09:55.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.177 [2024-11-20T08:09:55.206Z] =================================================================================================================== 00:19:18.177 [2024-11-20T08:09:55.206Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:18.177 09:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3352235 00:19:18.177 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3352085 00:19:18.177 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@952 -- # '[' -z 3352085 ']' 00:19:18.177 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3352085 00:19:18.177 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:18.177 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:18.177 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3352085 00:19:18.177 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:18.177 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:18.177 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3352085' 00:19:18.177 killing process with pid 3352085 00:19:18.177 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3352085 00:19:18.177 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3352085 00:19:18.436 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:18.436 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:18.436 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:18.436 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.436 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3353469 00:19:18.436 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:18.436 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3353469 00:19:18.436 
09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3353469 ']' 00:19:18.436 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.436 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:18.436 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.436 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:18.436 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.436 [2024-11-20 09:09:55.396222] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:19:18.436 [2024-11-20 09:09:55.396323] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:18.695 [2024-11-20 09:09:55.471046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.695 [2024-11-20 09:09:55.529610] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:18.695 [2024-11-20 09:09:55.529680] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:18.695 [2024-11-20 09:09:55.529694] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:18.695 [2024-11-20 09:09:55.529720] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:19:18.695 [2024-11-20 09:09:55.529731] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:18.695 [2024-11-20 09:09:55.530353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.695 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:18.695 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:18.695 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:18.695 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:18.695 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.695 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.695 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.PrAtAO5Ubf 00:19:18.695 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.PrAtAO5Ubf 00:19:18.695 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:18.954 [2024-11-20 09:09:55.956839] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.954 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:19.520 09:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:19.520 [2024-11-20 09:09:56.546459] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:19:19.520 [2024-11-20 09:09:56.546726] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:19.778 09:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:20.036 malloc0 00:19:20.036 09:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:20.294 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.PrAtAO5Ubf 00:19:20.552 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:20.810 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3353817 00:19:20.810 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:20.810 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:20.810 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3353817 /var/tmp/bdevperf.sock 00:19:20.810 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3353817 ']' 00:19:20.810 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:20.810 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:20.810 
09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:20.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:20.810 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:20.810 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.810 [2024-11-20 09:09:57.693404] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:19:20.810 [2024-11-20 09:09:57.693490] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3353817 ] 00:19:20.810 [2024-11-20 09:09:57.758713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.810 [2024-11-20 09:09:57.815780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.068 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:21.068 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:21.068 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PrAtAO5Ubf 00:19:21.326 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:21.583 [2024-11-20 09:09:58.447562] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:19:21.583 nvme0n1 00:19:21.583 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:21.842 Running I/O for 1 seconds... 00:19:22.774 3418.00 IOPS, 13.35 MiB/s 00:19:22.774 Latency(us) 00:19:22.774 [2024-11-20T08:09:59.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.774 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:22.774 Verification LBA range: start 0x0 length 0x2000 00:19:22.774 nvme0n1 : 1.02 3481.86 13.60 0.00 0.00 36451.15 6650.69 33399.09 00:19:22.774 [2024-11-20T08:09:59.803Z] =================================================================================================================== 00:19:22.774 [2024-11-20T08:09:59.803Z] Total : 3481.86 13.60 0.00 0.00 36451.15 6650.69 33399.09 00:19:22.774 { 00:19:22.774 "results": [ 00:19:22.774 { 00:19:22.774 "job": "nvme0n1", 00:19:22.774 "core_mask": "0x2", 00:19:22.774 "workload": "verify", 00:19:22.774 "status": "finished", 00:19:22.774 "verify_range": { 00:19:22.774 "start": 0, 00:19:22.774 "length": 8192 00:19:22.774 }, 00:19:22.774 "queue_depth": 128, 00:19:22.774 "io_size": 4096, 00:19:22.774 "runtime": 1.018708, 00:19:22.774 "iops": 3481.861338087067, 00:19:22.774 "mibps": 13.601020851902605, 00:19:22.774 "io_failed": 0, 00:19:22.774 "io_timeout": 0, 00:19:22.774 "avg_latency_us": 36451.15207802107, 00:19:22.774 "min_latency_us": 6650.69037037037, 00:19:22.774 "max_latency_us": 33399.08740740741 00:19:22.774 } 00:19:22.774 ], 00:19:22.774 "core_count": 1 00:19:22.774 } 00:19:22.774 09:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3353817 00:19:22.774 09:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3353817 ']' 00:19:22.774 09:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # kill -0 3353817 00:19:22.774 09:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:22.774 09:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:22.774 09:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3353817 00:19:22.774 09:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:22.774 09:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:22.774 09:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3353817' 00:19:22.774 killing process with pid 3353817 00:19:22.774 09:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3353817 00:19:22.774 Received shutdown signal, test time was about 1.000000 seconds 00:19:22.774 00:19:22.774 Latency(us) 00:19:22.775 [2024-11-20T08:09:59.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.775 [2024-11-20T08:09:59.804Z] =================================================================================================================== 00:19:22.775 [2024-11-20T08:09:59.804Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:22.775 09:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3353817 00:19:23.032 09:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3353469 00:19:23.032 09:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3353469 ']' 00:19:23.032 09:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3353469 00:19:23.032 09:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:23.032 09:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:23.032 09:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3353469 00:19:23.032 09:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:23.032 09:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:23.032 09:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3353469' 00:19:23.033 killing process with pid 3353469 00:19:23.033 09:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3353469 00:19:23.033 09:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3353469 00:19:23.290 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:23.291 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:23.291 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:23.291 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.291 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3354130 00:19:23.291 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:23.291 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3354130 00:19:23.291 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3354130 ']' 00:19:23.291 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.291 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # 
local max_retries=100 00:19:23.291 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.291 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:23.291 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.291 [2024-11-20 09:10:00.267823] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:19:23.291 [2024-11-20 09:10:00.267926] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.549 [2024-11-20 09:10:00.338172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.549 [2024-11-20 09:10:00.390911] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:23.549 [2024-11-20 09:10:00.390971] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:23.549 [2024-11-20 09:10:00.390999] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:23.549 [2024-11-20 09:10:00.391009] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:23.549 [2024-11-20 09:10:00.391018] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:23.549 [2024-11-20 09:10:00.391600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.549 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:23.549 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:23.549 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:23.549 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:23.549 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.549 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:23.549 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:23.549 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.550 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.550 [2024-11-20 09:10:00.532723] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:23.550 malloc0 00:19:23.550 [2024-11-20 09:10:00.564522] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:23.550 [2024-11-20 09:10:00.564770] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:23.807 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.807 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3354155 00:19:23.807 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:23.807 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 3354155 /var/tmp/bdevperf.sock 00:19:23.807 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3354155 ']' 00:19:23.807 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:23.807 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:23.807 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:23.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:23.807 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:23.807 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.807 [2024-11-20 09:10:00.640690] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:19:23.807 [2024-11-20 09:10:00.640758] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3354155 ] 00:19:23.807 [2024-11-20 09:10:00.706265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.808 [2024-11-20 09:10:00.767379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.065 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:24.065 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:24.065 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PrAtAO5Ubf 00:19:24.322 09:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:24.580 [2024-11-20 09:10:01.423834] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:24.580 nvme0n1 00:19:24.580 09:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:24.838 Running I/O for 1 seconds... 
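[Editorial aside] The TLS attach sequence just above — keyring_file_add_key followed by bdev_nvme_attach_controller with --psk key0 — goes over the bdevperf UNIX socket as JSON-RPC. The method names, parameter names, key name, PSK path, NQNs, and address below are all taken from this run's log; the JSON-RPC 2.0 framing itself is a sketch of what rpc.py sends, not SPDK source:

```python
import json

def jsonrpc_request(method, params, req_id=1):
    """Frame a JSON-RPC 2.0 request of the kind rpc.py sends over the UNIX socket."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# Register the TLS PSK file under the name "key0" (path taken from this run).
add_key = jsonrpc_request("keyring_file_add_key",
                          {"name": "key0", "path": "/tmp/tmp.PrAtAO5Ubf"})

# Attach a TLS-enabled NVMe/TCP controller that authenticates with that key.
attach = jsonrpc_request("bdev_nvme_attach_controller", {
    "name": "nvme0",
    "trtype": "TCP",
    "adrfam": "IPv4",
    "traddr": "10.0.0.2",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "psk": "key0",
}, req_id=2)

print(json.dumps(add_key, indent=2))
```

The resulting TLS session is what the "TLS support is considered experimental" notices from tcp.c and bdev_nvme_rpc.c refer to.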
00:19:25.770 3293.00 IOPS, 12.86 MiB/s 00:19:25.770 Latency(us) 00:19:25.770 [2024-11-20T08:10:02.799Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.770 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:25.770 Verification LBA range: start 0x0 length 0x2000 00:19:25.770 nvme0n1 : 1.03 3333.52 13.02 0.00 0.00 37900.94 6262.33 53982.25 00:19:25.770 [2024-11-20T08:10:02.799Z] =================================================================================================================== 00:19:25.770 [2024-11-20T08:10:02.799Z] Total : 3333.52 13.02 0.00 0.00 37900.94 6262.33 53982.25 00:19:25.770 { 00:19:25.770 "results": [ 00:19:25.770 { 00:19:25.770 "job": "nvme0n1", 00:19:25.770 "core_mask": "0x2", 00:19:25.770 "workload": "verify", 00:19:25.770 "status": "finished", 00:19:25.770 "verify_range": { 00:19:25.770 "start": 0, 00:19:25.770 "length": 8192 00:19:25.770 }, 00:19:25.770 "queue_depth": 128, 00:19:25.770 "io_size": 4096, 00:19:25.770 "runtime": 1.026244, 00:19:25.770 "iops": 3333.5152263984005, 00:19:25.770 "mibps": 13.021543853118752, 00:19:25.770 "io_failed": 0, 00:19:25.770 "io_timeout": 0, 00:19:25.770 "avg_latency_us": 37900.940268277634, 00:19:25.770 "min_latency_us": 6262.328888888889, 00:19:25.770 "max_latency_us": 53982.24592592593 00:19:25.770 } 00:19:25.770 ], 00:19:25.770 "core_count": 1 00:19:25.770 } 00:19:25.770 09:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:25.770 09:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.770 09:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.770 09:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.770 09:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:25.770 "subsystems": [ 00:19:25.770 { 00:19:25.770 "subsystem": 
"keyring", 00:19:25.770 "config": [ 00:19:25.770 { 00:19:25.770 "method": "keyring_file_add_key", 00:19:25.770 "params": { 00:19:25.770 "name": "key0", 00:19:25.770 "path": "/tmp/tmp.PrAtAO5Ubf" 00:19:25.770 } 00:19:25.770 } 00:19:25.770 ] 00:19:25.770 }, 00:19:25.770 { 00:19:25.770 "subsystem": "iobuf", 00:19:25.770 "config": [ 00:19:25.770 { 00:19:25.770 "method": "iobuf_set_options", 00:19:25.770 "params": { 00:19:25.770 "small_pool_count": 8192, 00:19:25.770 "large_pool_count": 1024, 00:19:25.770 "small_bufsize": 8192, 00:19:25.770 "large_bufsize": 135168, 00:19:25.770 "enable_numa": false 00:19:25.770 } 00:19:25.770 } 00:19:25.770 ] 00:19:25.770 }, 00:19:25.770 { 00:19:25.770 "subsystem": "sock", 00:19:25.770 "config": [ 00:19:25.770 { 00:19:25.770 "method": "sock_set_default_impl", 00:19:25.770 "params": { 00:19:25.770 "impl_name": "posix" 00:19:25.770 } 00:19:25.770 }, 00:19:25.770 { 00:19:25.770 "method": "sock_impl_set_options", 00:19:25.770 "params": { 00:19:25.770 "impl_name": "ssl", 00:19:25.770 "recv_buf_size": 4096, 00:19:25.770 "send_buf_size": 4096, 00:19:25.770 "enable_recv_pipe": true, 00:19:25.770 "enable_quickack": false, 00:19:25.770 "enable_placement_id": 0, 00:19:25.770 "enable_zerocopy_send_server": true, 00:19:25.770 "enable_zerocopy_send_client": false, 00:19:25.770 "zerocopy_threshold": 0, 00:19:25.770 "tls_version": 0, 00:19:25.770 "enable_ktls": false 00:19:25.770 } 00:19:25.770 }, 00:19:25.770 { 00:19:25.770 "method": "sock_impl_set_options", 00:19:25.770 "params": { 00:19:25.770 "impl_name": "posix", 00:19:25.770 "recv_buf_size": 2097152, 00:19:25.770 "send_buf_size": 2097152, 00:19:25.770 "enable_recv_pipe": true, 00:19:25.770 "enable_quickack": false, 00:19:25.770 "enable_placement_id": 0, 00:19:25.770 "enable_zerocopy_send_server": true, 00:19:25.770 "enable_zerocopy_send_client": false, 00:19:25.770 "zerocopy_threshold": 0, 00:19:25.770 "tls_version": 0, 00:19:25.770 "enable_ktls": false 00:19:25.770 } 00:19:25.770 } 00:19:25.770 
] 00:19:25.770 }, 00:19:25.770 { 00:19:25.770 "subsystem": "vmd", 00:19:25.770 "config": [] 00:19:25.770 }, 00:19:25.770 { 00:19:25.770 "subsystem": "accel", 00:19:25.770 "config": [ 00:19:25.770 { 00:19:25.770 "method": "accel_set_options", 00:19:25.770 "params": { 00:19:25.770 "small_cache_size": 128, 00:19:25.770 "large_cache_size": 16, 00:19:25.770 "task_count": 2048, 00:19:25.770 "sequence_count": 2048, 00:19:25.770 "buf_count": 2048 00:19:25.770 } 00:19:25.770 } 00:19:25.770 ] 00:19:25.770 }, 00:19:25.770 { 00:19:25.770 "subsystem": "bdev", 00:19:25.770 "config": [ 00:19:25.770 { 00:19:25.770 "method": "bdev_set_options", 00:19:25.770 "params": { 00:19:25.770 "bdev_io_pool_size": 65535, 00:19:25.770 "bdev_io_cache_size": 256, 00:19:25.770 "bdev_auto_examine": true, 00:19:25.770 "iobuf_small_cache_size": 128, 00:19:25.770 "iobuf_large_cache_size": 16 00:19:25.770 } 00:19:25.770 }, 00:19:25.770 { 00:19:25.770 "method": "bdev_raid_set_options", 00:19:25.770 "params": { 00:19:25.770 "process_window_size_kb": 1024, 00:19:25.770 "process_max_bandwidth_mb_sec": 0 00:19:25.770 } 00:19:25.770 }, 00:19:25.770 { 00:19:25.770 "method": "bdev_iscsi_set_options", 00:19:25.770 "params": { 00:19:25.770 "timeout_sec": 30 00:19:25.771 } 00:19:25.771 }, 00:19:25.771 { 00:19:25.771 "method": "bdev_nvme_set_options", 00:19:25.771 "params": { 00:19:25.771 "action_on_timeout": "none", 00:19:25.771 "timeout_us": 0, 00:19:25.771 "timeout_admin_us": 0, 00:19:25.771 "keep_alive_timeout_ms": 10000, 00:19:25.771 "arbitration_burst": 0, 00:19:25.771 "low_priority_weight": 0, 00:19:25.771 "medium_priority_weight": 0, 00:19:25.771 "high_priority_weight": 0, 00:19:25.771 "nvme_adminq_poll_period_us": 10000, 00:19:25.771 "nvme_ioq_poll_period_us": 0, 00:19:25.771 "io_queue_requests": 0, 00:19:25.771 "delay_cmd_submit": true, 00:19:25.771 "transport_retry_count": 4, 00:19:25.771 "bdev_retry_count": 3, 00:19:25.771 "transport_ack_timeout": 0, 00:19:25.771 "ctrlr_loss_timeout_sec": 0, 
00:19:25.771 "reconnect_delay_sec": 0, 00:19:25.771 "fast_io_fail_timeout_sec": 0, 00:19:25.771 "disable_auto_failback": false, 00:19:25.771 "generate_uuids": false, 00:19:25.771 "transport_tos": 0, 00:19:25.771 "nvme_error_stat": false, 00:19:25.771 "rdma_srq_size": 0, 00:19:25.771 "io_path_stat": false, 00:19:25.771 "allow_accel_sequence": false, 00:19:25.771 "rdma_max_cq_size": 0, 00:19:25.771 "rdma_cm_event_timeout_ms": 0, 00:19:25.771 "dhchap_digests": [ 00:19:25.771 "sha256", 00:19:25.771 "sha384", 00:19:25.771 "sha512" 00:19:25.771 ], 00:19:25.771 "dhchap_dhgroups": [ 00:19:25.771 "null", 00:19:25.771 "ffdhe2048", 00:19:25.771 "ffdhe3072", 00:19:25.771 "ffdhe4096", 00:19:25.771 "ffdhe6144", 00:19:25.771 "ffdhe8192" 00:19:25.771 ] 00:19:25.771 } 00:19:25.771 }, 00:19:25.771 { 00:19:25.771 "method": "bdev_nvme_set_hotplug", 00:19:25.771 "params": { 00:19:25.771 "period_us": 100000, 00:19:25.771 "enable": false 00:19:25.771 } 00:19:25.771 }, 00:19:25.771 { 00:19:25.771 "method": "bdev_malloc_create", 00:19:25.771 "params": { 00:19:25.771 "name": "malloc0", 00:19:25.771 "num_blocks": 8192, 00:19:25.771 "block_size": 4096, 00:19:25.771 "physical_block_size": 4096, 00:19:25.771 "uuid": "99e4e0d9-289c-4a66-8654-cefd8fab6d89", 00:19:25.771 "optimal_io_boundary": 0, 00:19:25.771 "md_size": 0, 00:19:25.771 "dif_type": 0, 00:19:25.771 "dif_is_head_of_md": false, 00:19:25.771 "dif_pi_format": 0 00:19:25.771 } 00:19:25.771 }, 00:19:25.771 { 00:19:25.771 "method": "bdev_wait_for_examine" 00:19:25.771 } 00:19:25.771 ] 00:19:25.771 }, 00:19:25.771 { 00:19:25.771 "subsystem": "nbd", 00:19:25.771 "config": [] 00:19:25.771 }, 00:19:25.771 { 00:19:25.771 "subsystem": "scheduler", 00:19:25.771 "config": [ 00:19:25.771 { 00:19:25.771 "method": "framework_set_scheduler", 00:19:25.771 "params": { 00:19:25.771 "name": "static" 00:19:25.771 } 00:19:25.771 } 00:19:25.771 ] 00:19:25.771 }, 00:19:25.771 { 00:19:25.771 "subsystem": "nvmf", 00:19:25.771 "config": [ 00:19:25.771 { 
00:19:25.771 "method": "nvmf_set_config", 00:19:25.771 "params": { 00:19:25.771 "discovery_filter": "match_any", 00:19:25.771 "admin_cmd_passthru": { 00:19:25.771 "identify_ctrlr": false 00:19:25.771 }, 00:19:25.771 "dhchap_digests": [ 00:19:25.771 "sha256", 00:19:25.771 "sha384", 00:19:25.771 "sha512" 00:19:25.771 ], 00:19:25.771 "dhchap_dhgroups": [ 00:19:25.771 "null", 00:19:25.771 "ffdhe2048", 00:19:25.771 "ffdhe3072", 00:19:25.771 "ffdhe4096", 00:19:25.771 "ffdhe6144", 00:19:25.771 "ffdhe8192" 00:19:25.771 ] 00:19:25.771 } 00:19:25.771 }, 00:19:25.771 { 00:19:25.771 "method": "nvmf_set_max_subsystems", 00:19:25.771 "params": { 00:19:25.771 "max_subsystems": 1024 00:19:25.771 } 00:19:25.771 }, 00:19:25.771 { 00:19:25.771 "method": "nvmf_set_crdt", 00:19:25.771 "params": { 00:19:25.771 "crdt1": 0, 00:19:25.771 "crdt2": 0, 00:19:25.771 "crdt3": 0 00:19:25.771 } 00:19:25.771 }, 00:19:25.771 { 00:19:25.771 "method": "nvmf_create_transport", 00:19:25.771 "params": { 00:19:25.771 "trtype": "TCP", 00:19:25.771 "max_queue_depth": 128, 00:19:25.771 "max_io_qpairs_per_ctrlr": 127, 00:19:25.771 "in_capsule_data_size": 4096, 00:19:25.771 "max_io_size": 131072, 00:19:25.771 "io_unit_size": 131072, 00:19:25.771 "max_aq_depth": 128, 00:19:25.771 "num_shared_buffers": 511, 00:19:25.771 "buf_cache_size": 4294967295, 00:19:25.771 "dif_insert_or_strip": false, 00:19:25.771 "zcopy": false, 00:19:25.771 "c2h_success": false, 00:19:25.771 "sock_priority": 0, 00:19:25.771 "abort_timeout_sec": 1, 00:19:25.771 "ack_timeout": 0, 00:19:25.771 "data_wr_pool_size": 0 00:19:25.771 } 00:19:25.771 }, 00:19:25.771 { 00:19:25.771 "method": "nvmf_create_subsystem", 00:19:25.771 "params": { 00:19:25.771 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.771 "allow_any_host": false, 00:19:25.771 "serial_number": "00000000000000000000", 00:19:25.771 "model_number": "SPDK bdev Controller", 00:19:25.771 "max_namespaces": 32, 00:19:25.771 "min_cntlid": 1, 00:19:25.771 "max_cntlid": 65519, 00:19:25.771 
"ana_reporting": false 00:19:25.771 } 00:19:25.771 }, 00:19:25.771 { 00:19:25.771 "method": "nvmf_subsystem_add_host", 00:19:25.771 "params": { 00:19:25.771 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.771 "host": "nqn.2016-06.io.spdk:host1", 00:19:25.771 "psk": "key0" 00:19:25.771 } 00:19:25.771 }, 00:19:25.771 { 00:19:25.771 "method": "nvmf_subsystem_add_ns", 00:19:25.771 "params": { 00:19:25.771 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.771 "namespace": { 00:19:25.771 "nsid": 1, 00:19:25.771 "bdev_name": "malloc0", 00:19:25.771 "nguid": "99E4E0D9289C4A668654CEFD8FAB6D89", 00:19:25.771 "uuid": "99e4e0d9-289c-4a66-8654-cefd8fab6d89", 00:19:25.771 "no_auto_visible": false 00:19:25.771 } 00:19:25.771 } 00:19:25.771 }, 00:19:25.771 { 00:19:25.771 "method": "nvmf_subsystem_add_listener", 00:19:25.771 "params": { 00:19:25.771 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.771 "listen_address": { 00:19:25.771 "trtype": "TCP", 00:19:25.771 "adrfam": "IPv4", 00:19:25.771 "traddr": "10.0.0.2", 00:19:25.771 "trsvcid": "4420" 00:19:25.771 }, 00:19:25.771 "secure_channel": false, 00:19:25.771 "sock_impl": "ssl" 00:19:25.771 } 00:19:25.771 } 00:19:25.771 ] 00:19:25.771 } 00:19:25.771 ] 00:19:25.771 }' 00:19:25.771 09:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:26.338 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:26.338 "subsystems": [ 00:19:26.338 { 00:19:26.338 "subsystem": "keyring", 00:19:26.338 "config": [ 00:19:26.338 { 00:19:26.338 "method": "keyring_file_add_key", 00:19:26.338 "params": { 00:19:26.338 "name": "key0", 00:19:26.338 "path": "/tmp/tmp.PrAtAO5Ubf" 00:19:26.338 } 00:19:26.338 } 00:19:26.338 ] 00:19:26.338 }, 00:19:26.338 { 00:19:26.338 "subsystem": "iobuf", 00:19:26.338 "config": [ 00:19:26.338 { 00:19:26.338 "method": "iobuf_set_options", 00:19:26.338 "params": { 00:19:26.338 
"small_pool_count": 8192, 00:19:26.338 "large_pool_count": 1024, 00:19:26.338 "small_bufsize": 8192, 00:19:26.338 "large_bufsize": 135168, 00:19:26.338 "enable_numa": false 00:19:26.338 } 00:19:26.338 } 00:19:26.338 ] 00:19:26.338 }, 00:19:26.338 { 00:19:26.338 "subsystem": "sock", 00:19:26.338 "config": [ 00:19:26.338 { 00:19:26.338 "method": "sock_set_default_impl", 00:19:26.338 "params": { 00:19:26.338 "impl_name": "posix" 00:19:26.338 } 00:19:26.338 }, 00:19:26.338 { 00:19:26.338 "method": "sock_impl_set_options", 00:19:26.338 "params": { 00:19:26.338 "impl_name": "ssl", 00:19:26.338 "recv_buf_size": 4096, 00:19:26.338 "send_buf_size": 4096, 00:19:26.338 "enable_recv_pipe": true, 00:19:26.338 "enable_quickack": false, 00:19:26.338 "enable_placement_id": 0, 00:19:26.338 "enable_zerocopy_send_server": true, 00:19:26.338 "enable_zerocopy_send_client": false, 00:19:26.338 "zerocopy_threshold": 0, 00:19:26.338 "tls_version": 0, 00:19:26.338 "enable_ktls": false 00:19:26.338 } 00:19:26.338 }, 00:19:26.338 { 00:19:26.338 "method": "sock_impl_set_options", 00:19:26.338 "params": { 00:19:26.338 "impl_name": "posix", 00:19:26.338 "recv_buf_size": 2097152, 00:19:26.338 "send_buf_size": 2097152, 00:19:26.338 "enable_recv_pipe": true, 00:19:26.338 "enable_quickack": false, 00:19:26.338 "enable_placement_id": 0, 00:19:26.338 "enable_zerocopy_send_server": true, 00:19:26.338 "enable_zerocopy_send_client": false, 00:19:26.338 "zerocopy_threshold": 0, 00:19:26.338 "tls_version": 0, 00:19:26.338 "enable_ktls": false 00:19:26.338 } 00:19:26.338 } 00:19:26.338 ] 00:19:26.338 }, 00:19:26.338 { 00:19:26.338 "subsystem": "vmd", 00:19:26.338 "config": [] 00:19:26.338 }, 00:19:26.338 { 00:19:26.338 "subsystem": "accel", 00:19:26.338 "config": [ 00:19:26.338 { 00:19:26.338 "method": "accel_set_options", 00:19:26.338 "params": { 00:19:26.338 "small_cache_size": 128, 00:19:26.338 "large_cache_size": 16, 00:19:26.338 "task_count": 2048, 00:19:26.338 "sequence_count": 2048, 00:19:26.338 
"buf_count": 2048 00:19:26.338 } 00:19:26.338 } 00:19:26.338 ] 00:19:26.338 }, 00:19:26.338 { 00:19:26.338 "subsystem": "bdev", 00:19:26.338 "config": [ 00:19:26.338 { 00:19:26.338 "method": "bdev_set_options", 00:19:26.338 "params": { 00:19:26.338 "bdev_io_pool_size": 65535, 00:19:26.338 "bdev_io_cache_size": 256, 00:19:26.338 "bdev_auto_examine": true, 00:19:26.338 "iobuf_small_cache_size": 128, 00:19:26.338 "iobuf_large_cache_size": 16 00:19:26.338 } 00:19:26.338 }, 00:19:26.338 { 00:19:26.338 "method": "bdev_raid_set_options", 00:19:26.338 "params": { 00:19:26.338 "process_window_size_kb": 1024, 00:19:26.338 "process_max_bandwidth_mb_sec": 0 00:19:26.338 } 00:19:26.338 }, 00:19:26.338 { 00:19:26.338 "method": "bdev_iscsi_set_options", 00:19:26.338 "params": { 00:19:26.338 "timeout_sec": 30 00:19:26.338 } 00:19:26.338 }, 00:19:26.338 { 00:19:26.338 "method": "bdev_nvme_set_options", 00:19:26.338 "params": { 00:19:26.338 "action_on_timeout": "none", 00:19:26.338 "timeout_us": 0, 00:19:26.338 "timeout_admin_us": 0, 00:19:26.338 "keep_alive_timeout_ms": 10000, 00:19:26.338 "arbitration_burst": 0, 00:19:26.338 "low_priority_weight": 0, 00:19:26.338 "medium_priority_weight": 0, 00:19:26.338 "high_priority_weight": 0, 00:19:26.338 "nvme_adminq_poll_period_us": 10000, 00:19:26.338 "nvme_ioq_poll_period_us": 0, 00:19:26.338 "io_queue_requests": 512, 00:19:26.338 "delay_cmd_submit": true, 00:19:26.338 "transport_retry_count": 4, 00:19:26.338 "bdev_retry_count": 3, 00:19:26.338 "transport_ack_timeout": 0, 00:19:26.338 "ctrlr_loss_timeout_sec": 0, 00:19:26.338 "reconnect_delay_sec": 0, 00:19:26.338 "fast_io_fail_timeout_sec": 0, 00:19:26.338 "disable_auto_failback": false, 00:19:26.338 "generate_uuids": false, 00:19:26.338 "transport_tos": 0, 00:19:26.338 "nvme_error_stat": false, 00:19:26.338 "rdma_srq_size": 0, 00:19:26.338 "io_path_stat": false, 00:19:26.338 "allow_accel_sequence": false, 00:19:26.338 "rdma_max_cq_size": 0, 00:19:26.338 "rdma_cm_event_timeout_ms": 0, 
00:19:26.338 "dhchap_digests": [ 00:19:26.338 "sha256", 00:19:26.338 "sha384", 00:19:26.338 "sha512" 00:19:26.338 ], 00:19:26.338 "dhchap_dhgroups": [ 00:19:26.338 "null", 00:19:26.338 "ffdhe2048", 00:19:26.338 "ffdhe3072", 00:19:26.338 "ffdhe4096", 00:19:26.338 "ffdhe6144", 00:19:26.338 "ffdhe8192" 00:19:26.338 ] 00:19:26.338 } 00:19:26.338 }, 00:19:26.338 { 00:19:26.338 "method": "bdev_nvme_attach_controller", 00:19:26.338 "params": { 00:19:26.338 "name": "nvme0", 00:19:26.338 "trtype": "TCP", 00:19:26.338 "adrfam": "IPv4", 00:19:26.338 "traddr": "10.0.0.2", 00:19:26.338 "trsvcid": "4420", 00:19:26.338 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.338 "prchk_reftag": false, 00:19:26.338 "prchk_guard": false, 00:19:26.338 "ctrlr_loss_timeout_sec": 0, 00:19:26.338 "reconnect_delay_sec": 0, 00:19:26.338 "fast_io_fail_timeout_sec": 0, 00:19:26.338 "psk": "key0", 00:19:26.339 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:26.339 "hdgst": false, 00:19:26.339 "ddgst": false, 00:19:26.339 "multipath": "multipath" 00:19:26.339 } 00:19:26.339 }, 00:19:26.339 { 00:19:26.339 "method": "bdev_nvme_set_hotplug", 00:19:26.339 "params": { 00:19:26.339 "period_us": 100000, 00:19:26.339 "enable": false 00:19:26.339 } 00:19:26.339 }, 00:19:26.339 { 00:19:26.339 "method": "bdev_enable_histogram", 00:19:26.339 "params": { 00:19:26.339 "name": "nvme0n1", 00:19:26.339 "enable": true 00:19:26.339 } 00:19:26.339 }, 00:19:26.339 { 00:19:26.339 "method": "bdev_wait_for_examine" 00:19:26.339 } 00:19:26.339 ] 00:19:26.339 }, 00:19:26.339 { 00:19:26.339 "subsystem": "nbd", 00:19:26.339 "config": [] 00:19:26.339 } 00:19:26.339 ] 00:19:26.339 }' 00:19:26.339 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3354155 00:19:26.339 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3354155 ']' 00:19:26.339 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3354155 00:19:26.339 09:10:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:26.339 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:26.339 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3354155 00:19:26.339 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:26.339 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:26.339 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3354155' 00:19:26.339 killing process with pid 3354155 00:19:26.339 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3354155 00:19:26.339 Received shutdown signal, test time was about 1.000000 seconds 00:19:26.339 00:19:26.339 Latency(us) 00:19:26.339 [2024-11-20T08:10:03.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.339 [2024-11-20T08:10:03.368Z] =================================================================================================================== 00:19:26.339 [2024-11-20T08:10:03.368Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:26.339 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3354155 00:19:26.339 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3354130 00:19:26.339 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3354130 ']' 00:19:26.339 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3354130 00:19:26.339 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:26.339 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:26.339 
09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3354130 00:19:26.598 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:26.598 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:26.598 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3354130' 00:19:26.598 killing process with pid 3354130 00:19:26.598 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3354130 00:19:26.598 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3354130 00:19:26.598 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:19:26.599 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:26.599 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:26.599 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:19:26.599 "subsystems": [ 00:19:26.599 { 00:19:26.599 "subsystem": "keyring", 00:19:26.599 "config": [ 00:19:26.599 { 00:19:26.599 "method": "keyring_file_add_key", 00:19:26.599 "params": { 00:19:26.599 "name": "key0", 00:19:26.599 "path": "/tmp/tmp.PrAtAO5Ubf" 00:19:26.599 } 00:19:26.599 } 00:19:26.599 ] 00:19:26.599 }, 00:19:26.599 { 00:19:26.599 "subsystem": "iobuf", 00:19:26.599 "config": [ 00:19:26.599 { 00:19:26.599 "method": "iobuf_set_options", 00:19:26.599 "params": { 00:19:26.599 "small_pool_count": 8192, 00:19:26.599 "large_pool_count": 1024, 00:19:26.599 "small_bufsize": 8192, 00:19:26.599 "large_bufsize": 135168, 00:19:26.599 "enable_numa": false 00:19:26.599 } 00:19:26.599 } 00:19:26.599 ] 00:19:26.599 }, 00:19:26.599 { 00:19:26.599 "subsystem": "sock", 00:19:26.599 "config": [ 
00:19:26.599 { 00:19:26.599 "method": "sock_set_default_impl", 00:19:26.599 "params": { 00:19:26.599 "impl_name": "posix" 00:19:26.599 } 00:19:26.599 }, 00:19:26.599 { 00:19:26.599 "method": "sock_impl_set_options", 00:19:26.599 "params": { 00:19:26.599 "impl_name": "ssl", 00:19:26.599 "recv_buf_size": 4096, 00:19:26.599 "send_buf_size": 4096, 00:19:26.599 "enable_recv_pipe": true, 00:19:26.599 "enable_quickack": false, 00:19:26.599 "enable_placement_id": 0, 00:19:26.599 "enable_zerocopy_send_server": true, 00:19:26.599 "enable_zerocopy_send_client": false, 00:19:26.599 "zerocopy_threshold": 0, 00:19:26.599 "tls_version": 0, 00:19:26.599 "enable_ktls": false 00:19:26.599 } 00:19:26.599 }, 00:19:26.599 { 00:19:26.599 "method": "sock_impl_set_options", 00:19:26.599 "params": { 00:19:26.599 "impl_name": "posix", 00:19:26.599 "recv_buf_size": 2097152, 00:19:26.599 "send_buf_size": 2097152, 00:19:26.599 "enable_recv_pipe": true, 00:19:26.599 "enable_quickack": false, 00:19:26.599 "enable_placement_id": 0, 00:19:26.599 "enable_zerocopy_send_server": true, 00:19:26.599 "enable_zerocopy_send_client": false, 00:19:26.599 "zerocopy_threshold": 0, 00:19:26.599 "tls_version": 0, 00:19:26.599 "enable_ktls": false 00:19:26.599 } 00:19:26.599 } 00:19:26.599 ] 00:19:26.599 }, 00:19:26.599 { 00:19:26.599 "subsystem": "vmd", 00:19:26.599 "config": [] 00:19:26.599 }, 00:19:26.599 { 00:19:26.599 "subsystem": "accel", 00:19:26.599 "config": [ 00:19:26.599 { 00:19:26.599 "method": "accel_set_options", 00:19:26.599 "params": { 00:19:26.599 "small_cache_size": 128, 00:19:26.599 "large_cache_size": 16, 00:19:26.599 "task_count": 2048, 00:19:26.599 "sequence_count": 2048, 00:19:26.599 "buf_count": 2048 00:19:26.599 } 00:19:26.599 } 00:19:26.599 ] 00:19:26.599 }, 00:19:26.599 { 00:19:26.599 "subsystem": "bdev", 00:19:26.599 "config": [ 00:19:26.599 { 00:19:26.599 "method": "bdev_set_options", 00:19:26.599 "params": { 00:19:26.599 "bdev_io_pool_size": 65535, 00:19:26.599 "bdev_io_cache_size": 
256, 00:19:26.599 "bdev_auto_examine": true, 00:19:26.599 "iobuf_small_cache_size": 128, 00:19:26.599 "iobuf_large_cache_size": 16 00:19:26.599 } 00:19:26.599 }, 00:19:26.599 { 00:19:26.599 "method": "bdev_raid_set_options", 00:19:26.599 "params": { 00:19:26.599 "process_window_size_kb": 1024, 00:19:26.599 "process_max_bandwidth_mb_sec": 0 00:19:26.599 } 00:19:26.599 }, 00:19:26.599 { 00:19:26.599 "method": "bdev_iscsi_set_options", 00:19:26.599 "params": { 00:19:26.599 "timeout_sec": 30 00:19:26.599 } 00:19:26.599 }, 00:19:26.599 { 00:19:26.599 "method": "bdev_nvme_set_options", 00:19:26.599 "params": { 00:19:26.599 "action_on_timeout": "none", 00:19:26.599 "timeout_us": 0, 00:19:26.599 "timeout_admin_us": 0, 00:19:26.599 "keep_alive_timeout_ms": 10000, 00:19:26.599 "arbitration_burst": 0, 00:19:26.599 "low_priority_weight": 0, 00:19:26.599 "medium_priority_weight": 0, 00:19:26.599 "high_priority_weight": 0, 00:19:26.599 "nvme_adminq_poll_period_us": 10000, 00:19:26.599 "nvme_ioq_poll_period_us": 0, 00:19:26.599 "io_queue_requests": 0, 00:19:26.599 "delay_cmd_submit": true, 00:19:26.599 "transport_retry_count": 4, 00:19:26.599 "bdev_retry_count": 3, 00:19:26.599 "transport_ack_timeout": 0, 00:19:26.599 "ctrlr_loss_timeout_sec": 0, 00:19:26.599 "reconnect_delay_sec": 0, 00:19:26.599 "fast_io_fail_timeout_sec": 0, 00:19:26.599 "disable_auto_failback": false, 00:19:26.599 "generate_uuids": false, 00:19:26.599 "transport_tos": 0, 00:19:26.599 "nvme_error_stat": false, 00:19:26.599 "rdma_srq_size": 0, 00:19:26.599 "io_path_stat": false, 00:19:26.599 "allow_accel_sequence": false, 00:19:26.599 "rdma_max_cq_size": 0, 00:19:26.599 "rdma_cm_event_timeout_ms": 0, 00:19:26.599 "dhchap_digests": [ 00:19:26.599 "sha256", 00:19:26.599 "sha384", 00:19:26.599 "sha512" 00:19:26.599 ], 00:19:26.599 "dhchap_dhgroups": [ 00:19:26.599 "null", 00:19:26.599 "ffdhe2048", 00:19:26.599 "ffdhe3072", 00:19:26.599 "ffdhe4096", 00:19:26.599 "ffdhe6144", 00:19:26.599 "ffdhe8192" 00:19:26.599 ] 
00:19:26.599 } 00:19:26.599 }, 00:19:26.599 { 00:19:26.599 "method": "bdev_nvme_set_hotplug", 00:19:26.599 "params": { 00:19:26.599 "period_us": 100000, 00:19:26.599 "enable": false 00:19:26.599 } 00:19:26.599 }, 00:19:26.599 { 00:19:26.599 "method": "bdev_malloc_create", 00:19:26.599 "params": { 00:19:26.599 "name": "malloc0", 00:19:26.599 "num_blocks": 8192, 00:19:26.599 "block_size": 4096, 00:19:26.599 "physical_block_size": 4096, 00:19:26.599 "uuid": "99e4e0d9-289c-4a66-8654-cefd8fab6d89", 00:19:26.599 "optimal_io_boundary": 0, 00:19:26.599 "md_size": 0, 00:19:26.599 "dif_type": 0, 00:19:26.599 "dif_is_head_of_md": false, 00:19:26.599 "dif_pi_format": 0 00:19:26.599 } 00:19:26.599 }, 00:19:26.599 { 00:19:26.599 "method": "bdev_wait_for_examine" 00:19:26.599 } 00:19:26.599 ] 00:19:26.599 }, 00:19:26.599 { 00:19:26.599 "subsystem": "nbd", 00:19:26.599 "config": [] 00:19:26.599 }, 00:19:26.599 { 00:19:26.599 "subsystem": "scheduler", 00:19:26.599 "config": [ 00:19:26.599 { 00:19:26.599 "method": "framework_set_scheduler", 00:19:26.599 "params": { 00:19:26.599 "name": "static" 00:19:26.599 } 00:19:26.599 } 00:19:26.599 ] 00:19:26.599 }, 00:19:26.599 { 00:19:26.599 "subsystem": "nvmf", 00:19:26.599 "config": [ 00:19:26.599 { 00:19:26.599 "method": "nvmf_set_config", 00:19:26.599 "params": { 00:19:26.599 "discovery_filter": "match_any", 00:19:26.599 "admin_cmd_passthru": { 00:19:26.599 "identify_ctrlr": false 00:19:26.599 }, 00:19:26.599 "dhchap_digests": [ 00:19:26.599 "sha256", 00:19:26.599 "sha384", 00:19:26.599 "sha512" 00:19:26.599 ], 00:19:26.599 "dhchap_dhgroups": [ 00:19:26.599 "null", 00:19:26.599 "ffdhe2048", 00:19:26.599 "ffdhe3072", 00:19:26.599 "ffdhe4096", 00:19:26.599 "ffdhe6144", 00:19:26.599 "ffdhe8192" 00:19:26.599 ] 00:19:26.599 } 00:19:26.599 }, 00:19:26.599 { 00:19:26.599 "method": "nvmf_set_max_subsystems", 00:19:26.599 "params": { 00:19:26.599 "max_subsystems": 1024 00:19:26.599 } 00:19:26.599 }, 00:19:26.599 { 00:19:26.599 "method": 
"nvmf_set_crdt", 00:19:26.599 "params": { 00:19:26.599 "crdt1": 0, 00:19:26.599 "crdt2": 0, 00:19:26.599 "crdt3": 0 00:19:26.599 } 00:19:26.599 }, 00:19:26.599 { 00:19:26.599 "method": "nvmf_create_transport", 00:19:26.599 "params": { 00:19:26.599 "trtype": "TCP", 00:19:26.599 "max_queue_depth": 128, 00:19:26.599 "max_io_qpairs_per_ctrlr": 127, 00:19:26.599 "in_capsule_data_size": 4096, 00:19:26.599 "max_io_size": 131072, 00:19:26.599 "io_unit_size": 131072, 00:19:26.599 "max_aq_depth": 128, 00:19:26.599 "num_shared_buffers": 511, 00:19:26.599 "buf_cache_size": 4294967295, 00:19:26.599 "dif_insert_or_strip": false, 00:19:26.599 "zcopy": false, 00:19:26.599 "c2h_success": false, 00:19:26.599 "sock_priority": 0, 00:19:26.599 "abort_timeout_sec": 1, 00:19:26.599 "ack_timeout": 0, 00:19:26.599 "data_wr_pool_size": 0 00:19:26.599 } 00:19:26.599 }, 00:19:26.599 { 00:19:26.599 "method": "nvmf_create_subsystem", 00:19:26.599 "params": { 00:19:26.599 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.599 "allow_any_host": false, 00:19:26.599 "serial_number": "00000000000000000000", 00:19:26.599 "model_number": "SPDK bdev Controller", 00:19:26.599 "max_namespaces": 32, 00:19:26.599 "min_cntlid": 1, 00:19:26.599 "max_cntlid": 65519, 00:19:26.599 "ana_reporting": false 00:19:26.599 } 00:19:26.599 }, 00:19:26.599 { 00:19:26.600 "method": "nvmf_subsystem_add_host", 00:19:26.600 "params": { 00:19:26.600 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.600 "host": "nqn.2016-06.io.spdk:host1", 00:19:26.600 "psk": "key0" 00:19:26.600 } 00:19:26.600 }, 00:19:26.600 { 00:19:26.600 "method": "nvmf_subsystem_add_ns", 00:19:26.600 "params": { 00:19:26.600 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.600 "namespace": { 00:19:26.600 "nsid": 1, 00:19:26.600 "bdev_name": "malloc0", 00:19:26.600 "nguid": "99E4E0D9289C4A668654CEFD8FAB6D89", 00:19:26.600 "uuid": "99e4e0d9-289c-4a66-8654-cefd8fab6d89", 00:19:26.600 "no_auto_visible": false 00:19:26.600 } 00:19:26.600 } 00:19:26.600 }, 00:19:26.600 { 
00:19:26.600 "method": "nvmf_subsystem_add_listener", 00:19:26.600 "params": { 00:19:26.600 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.600 "listen_address": { 00:19:26.600 "trtype": "TCP", 00:19:26.600 "adrfam": "IPv4", 00:19:26.600 "traddr": "10.0.0.2", 00:19:26.600 "trsvcid": "4420" 00:19:26.600 }, 00:19:26.600 "secure_channel": false, 00:19:26.600 "sock_impl": "ssl" 00:19:26.600 } 00:19:26.600 } 00:19:26.600 ] 00:19:26.600 } 00:19:26.600 ] 00:19:26.600 }' 00:19:26.600 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.858 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3354561 00:19:26.858 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:26.858 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3354561 00:19:26.858 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3354561 ']' 00:19:26.858 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.858 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:26.858 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.858 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:26.858 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.858 [2024-11-20 09:10:03.678300] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:19:26.858 [2024-11-20 09:10:03.678423] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:26.858 [2024-11-20 09:10:03.747398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.858 [2024-11-20 09:10:03.798784] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:26.858 [2024-11-20 09:10:03.798845] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:26.858 [2024-11-20 09:10:03.798873] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:26.858 [2024-11-20 09:10:03.798883] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:26.858 [2024-11-20 09:10:03.798893] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:26.858 [2024-11-20 09:10:03.799514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.117 [2024-11-20 09:10:04.043402] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:27.117 [2024-11-20 09:10:04.075406] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:27.117 [2024-11-20 09:10:04.075646] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:28.050 09:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:28.050 09:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:28.050 09:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:28.050 09:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:28.050 09:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.050 09:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.050 09:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3354713 00:19:28.051 09:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3354713 /var/tmp/bdevperf.sock 00:19:28.051 09:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3354713 ']' 00:19:28.051 09:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:28.051 09:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:28.051 09:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:19:28.051 09:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:19:28.051 "subsystems": [ 00:19:28.051 { 00:19:28.051 "subsystem": "keyring", 00:19:28.051 "config": [ 00:19:28.051 { 00:19:28.051 "method": "keyring_file_add_key", 00:19:28.051 "params": { 00:19:28.051 "name": "key0", 00:19:28.051 "path": "/tmp/tmp.PrAtAO5Ubf" 00:19:28.051 } 00:19:28.051 } 00:19:28.051 ] 00:19:28.051 }, 00:19:28.051 { 00:19:28.051 "subsystem": "iobuf", 00:19:28.051 "config": [ 00:19:28.051 { 00:19:28.051 "method": "iobuf_set_options", 00:19:28.051 "params": { 00:19:28.051 "small_pool_count": 8192, 00:19:28.051 "large_pool_count": 1024, 00:19:28.051 "small_bufsize": 8192, 00:19:28.051 "large_bufsize": 135168, 00:19:28.051 "enable_numa": false 00:19:28.051 } 00:19:28.051 } 00:19:28.051 ] 00:19:28.051 }, 00:19:28.051 { 00:19:28.051 "subsystem": "sock", 00:19:28.051 "config": [ 00:19:28.051 { 00:19:28.051 "method": "sock_set_default_impl", 00:19:28.051 "params": { 00:19:28.051 "impl_name": "posix" 00:19:28.051 } 00:19:28.051 }, 00:19:28.051 { 00:19:28.051 "method": "sock_impl_set_options", 00:19:28.051 "params": { 00:19:28.051 "impl_name": "ssl", 00:19:28.051 "recv_buf_size": 4096, 00:19:28.051 "send_buf_size": 4096, 00:19:28.051 "enable_recv_pipe": true, 00:19:28.051 "enable_quickack": false, 00:19:28.051 "enable_placement_id": 0, 00:19:28.051 "enable_zerocopy_send_server": true, 00:19:28.051 "enable_zerocopy_send_client": false, 00:19:28.051 "zerocopy_threshold": 0, 00:19:28.051 "tls_version": 0, 00:19:28.051 "enable_ktls": false 00:19:28.051 } 00:19:28.051 }, 00:19:28.051 { 00:19:28.051 "method": "sock_impl_set_options", 00:19:28.051 "params": { 00:19:28.051 "impl_name": "posix", 00:19:28.051 "recv_buf_size": 2097152, 00:19:28.051 "send_buf_size": 2097152, 00:19:28.051 "enable_recv_pipe": true, 00:19:28.051 "enable_quickack": false, 00:19:28.051 "enable_placement_id": 0, 00:19:28.051 "enable_zerocopy_send_server": true, 00:19:28.051 
"enable_zerocopy_send_client": false, 00:19:28.051 "zerocopy_threshold": 0, 00:19:28.051 "tls_version": 0, 00:19:28.051 "enable_ktls": false 00:19:28.051 } 00:19:28.051 } 00:19:28.051 ] 00:19:28.051 }, 00:19:28.051 { 00:19:28.051 "subsystem": "vmd", 00:19:28.051 "config": [] 00:19:28.051 }, 00:19:28.051 { 00:19:28.051 "subsystem": "accel", 00:19:28.051 "config": [ 00:19:28.051 { 00:19:28.051 "method": "accel_set_options", 00:19:28.051 "params": { 00:19:28.051 "small_cache_size": 128, 00:19:28.051 "large_cache_size": 16, 00:19:28.051 "task_count": 2048, 00:19:28.051 "sequence_count": 2048, 00:19:28.051 "buf_count": 2048 00:19:28.051 } 00:19:28.051 } 00:19:28.051 ] 00:19:28.051 }, 00:19:28.051 { 00:19:28.051 "subsystem": "bdev", 00:19:28.051 "config": [ 00:19:28.051 { 00:19:28.051 "method": "bdev_set_options", 00:19:28.051 "params": { 00:19:28.051 "bdev_io_pool_size": 65535, 00:19:28.051 "bdev_io_cache_size": 256, 00:19:28.051 "bdev_auto_examine": true, 00:19:28.051 "iobuf_small_cache_size": 128, 00:19:28.051 "iobuf_large_cache_size": 16 00:19:28.051 } 00:19:28.051 }, 00:19:28.051 { 00:19:28.051 "method": "bdev_raid_set_options", 00:19:28.051 "params": { 00:19:28.051 "process_window_size_kb": 1024, 00:19:28.051 "process_max_bandwidth_mb_sec": 0 00:19:28.051 } 00:19:28.051 }, 00:19:28.051 { 00:19:28.051 "method": "bdev_iscsi_set_options", 00:19:28.051 "params": { 00:19:28.051 "timeout_sec": 30 00:19:28.051 } 00:19:28.051 }, 00:19:28.051 { 00:19:28.051 "method": "bdev_nvme_set_options", 00:19:28.051 "params": { 00:19:28.051 "action_on_timeout": "none", 00:19:28.051 "timeout_us": 0, 00:19:28.051 "timeout_admin_us": 0, 00:19:28.051 "keep_alive_timeout_ms": 10000, 00:19:28.051 "arbitration_burst": 0, 00:19:28.051 "low_priority_weight": 0, 00:19:28.051 "medium_priority_weight": 0, 00:19:28.051 "high_priority_weight": 0, 00:19:28.051 "nvme_adminq_poll_period_us": 10000, 00:19:28.051 "nvme_ioq_poll_period_us": 0, 00:19:28.051 "io_queue_requests": 512, 00:19:28.051 
"delay_cmd_submit": true, 00:19:28.051 "transport_retry_count": 4, 00:19:28.051 "bdev_retry_count": 3, 00:19:28.051 "transport_ack_timeout": 0, 00:19:28.051 "ctrlr_loss_timeout_sec": 0, 00:19:28.051 "reconnect_delay_sec": 0, 00:19:28.051 "fast_io_fail_timeout_sec": 0, 00:19:28.051 "disable_auto_failback": false, 00:19:28.051 "generate_uuids": false, 00:19:28.051 "transport_tos": 0, 00:19:28.051 "nvme_error_stat": false, 00:19:28.051 "rdma_srq_size": 0, 00:19:28.051 "io_path_stat": false, 00:19:28.051 "allow_accel_sequence": false, 00:19:28.051 "rdma_max_cq_size": 0, 00:19:28.051 "rdma_cm_event_timeout_ms": 0, 00:19:28.051 "dhchap_digests": [ 00:19:28.051 "sha256", 00:19:28.051 "sha384", 00:19:28.051 "sha512" 00:19:28.051 ], 00:19:28.051 "dhchap_dhgroups": [ 00:19:28.051 "null", 00:19:28.051 "ffdhe2048", 00:19:28.051 "ffdhe3072", 00:19:28.051 "ffdhe4096", 00:19:28.051 "ffdhe6144", 00:19:28.051 "ffdhe8192" 00:19:28.051 ] 00:19:28.051 } 00:19:28.051 }, 00:19:28.051 { 00:19:28.051 "method": "bdev_nvme_attach_controller", 00:19:28.051 "params": { 00:19:28.051 "name": "nvme0", 00:19:28.051 "trtype": "TCP", 00:19:28.051 "adrfam": "IPv4", 00:19:28.051 "traddr": "10.0.0.2", 00:19:28.051 "trsvcid": "4420", 00:19:28.051 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.051 "prchk_reftag": false, 00:19:28.051 "prchk_guard": false, 00:19:28.051 "ctrlr_loss_timeout_sec": 0, 00:19:28.051 "reconnect_delay_sec": 0, 00:19:28.051 "fast_io_fail_timeout_sec": 0, 00:19:28.051 "psk": "key0", 00:19:28.051 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:28.051 "hdgst": false, 00:19:28.051 "ddgst": false, 00:19:28.051 "multipath": "multipath" 00:19:28.051 } 00:19:28.051 }, 00:19:28.051 { 00:19:28.051 "method": "bdev_nvme_set_hotplug", 00:19:28.051 "params": { 00:19:28.051 "period_us": 100000, 00:19:28.051 "enable": false 00:19:28.051 } 00:19:28.051 }, 00:19:28.051 { 00:19:28.051 "method": "bdev_enable_histogram", 00:19:28.051 "params": { 00:19:28.051 "name": "nvme0n1", 00:19:28.051 "enable": 
true 00:19:28.051 } 00:19:28.051 }, 00:19:28.051 { 00:19:28.051 "method": "bdev_wait_for_examine" 00:19:28.051 } 00:19:28.051 ] 00:19:28.051 }, 00:19:28.051 { 00:19:28.051 "subsystem": "nbd", 00:19:28.051 "config": [] 00:19:28.051 } 00:19:28.051 ] 00:19:28.051 }' 00:19:28.051 09:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:28.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:28.051 09:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:28.051 09:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.051 [2024-11-20 09:10:04.793548] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:19:28.051 [2024-11-20 09:10:04.793647] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3354713 ] 00:19:28.051 [2024-11-20 09:10:04.860016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.051 [2024-11-20 09:10:04.918601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.309 [2024-11-20 09:10:05.099294] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:28.309 09:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:28.309 09:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:28.309 09:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:28.309 09:10:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:19:28.567 09:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.567 09:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:28.824 Running I/O for 1 seconds... 00:19:29.787 3508.00 IOPS, 13.70 MiB/s 00:19:29.787 Latency(us) 00:19:29.787 [2024-11-20T08:10:06.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.787 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:29.787 Verification LBA range: start 0x0 length 0x2000 00:19:29.787 nvme0n1 : 1.02 3573.24 13.96 0.00 0.00 35508.17 5849.69 38059.43 00:19:29.787 [2024-11-20T08:10:06.816Z] =================================================================================================================== 00:19:29.787 [2024-11-20T08:10:06.816Z] Total : 3573.24 13.96 0.00 0.00 35508.17 5849.69 38059.43 00:19:29.787 { 00:19:29.787 "results": [ 00:19:29.787 { 00:19:29.787 "job": "nvme0n1", 00:19:29.787 "core_mask": "0x2", 00:19:29.787 "workload": "verify", 00:19:29.787 "status": "finished", 00:19:29.787 "verify_range": { 00:19:29.787 "start": 0, 00:19:29.787 "length": 8192 00:19:29.787 }, 00:19:29.787 "queue_depth": 128, 00:19:29.787 "io_size": 4096, 00:19:29.787 "runtime": 1.017565, 00:19:29.787 "iops": 3573.236107767071, 00:19:29.787 "mibps": 13.957953545965122, 00:19:29.787 "io_failed": 0, 00:19:29.787 "io_timeout": 0, 00:19:29.787 "avg_latency_us": 35508.16942672045, 00:19:29.787 "min_latency_us": 5849.694814814815, 00:19:29.787 "max_latency_us": 38059.42518518519 00:19:29.787 } 00:19:29.787 ], 00:19:29.787 "core_count": 1 00:19:29.787 } 00:19:29.787 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:19:29.787 09:10:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:19:29.788 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:29.788 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:19:29.788 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:19:29.788 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:19:29.788 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:29.788 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:19:29.788 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:19:29.788 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:19:29.788 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:29.788 nvmf_trace.0 00:19:29.788 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:19:29.788 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3354713 00:19:29.788 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3354713 ']' 00:19:29.788 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3354713 00:19:29.788 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:29.788 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:29.788 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o 
comm= 3354713 00:19:29.788 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:29.788 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:29.788 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3354713' 00:19:29.788 killing process with pid 3354713 00:19:29.788 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3354713 00:19:29.788 Received shutdown signal, test time was about 1.000000 seconds 00:19:29.788 00:19:29.788 Latency(us) 00:19:29.788 [2024-11-20T08:10:06.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.788 [2024-11-20T08:10:06.817Z] =================================================================================================================== 00:19:29.788 [2024-11-20T08:10:06.817Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:29.788 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3354713 00:19:30.070 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:30.071 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:30.071 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:19:30.071 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:30.071 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:19:30.071 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:30.071 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:30.071 rmmod nvme_tcp 00:19:30.071 rmmod nvme_fabrics 00:19:30.071 rmmod nvme_keyring 00:19:30.071 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:19:30.071 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:19:30.071 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:30.071 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3354561 ']' 00:19:30.071 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3354561 00:19:30.071 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3354561 ']' 00:19:30.071 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3354561 00:19:30.071 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:30.071 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:30.071 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3354561 00:19:30.071 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:30.071 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:30.329 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3354561' 00:19:30.329 killing process with pid 3354561 00:19:30.329 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3354561 00:19:30.329 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3354561 00:19:30.329 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:30.329 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:30.329 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:30.329 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:19:30.329 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:19:30.329 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:30.329 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:19:30.329 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:30.329 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:30.329 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.329 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:30.329 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.vVrSOQmaJp /tmp/tmp.Mu04h2m32V /tmp/tmp.PrAtAO5Ubf 00:19:32.864 00:19:32.864 real 1m22.625s 00:19:32.864 user 2m18.535s 00:19:32.864 sys 0m24.895s 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.864 ************************************ 00:19:32.864 END TEST nvmf_tls 00:19:32.864 ************************************ 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:32.864 ************************************ 00:19:32.864 START TEST nvmf_fips 00:19:32.864 ************************************ 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:32.864 * Looking for test storage... 00:19:32.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:32.864 
09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:32.864 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:32.865 09:10:09 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:32.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.865 --rc genhtml_branch_coverage=1 00:19:32.865 --rc genhtml_function_coverage=1 00:19:32.865 --rc genhtml_legend=1 00:19:32.865 --rc geninfo_all_blocks=1 00:19:32.865 --rc geninfo_unexecuted_blocks=1 00:19:32.865 00:19:32.865 ' 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:32.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.865 --rc genhtml_branch_coverage=1 00:19:32.865 --rc genhtml_function_coverage=1 00:19:32.865 --rc genhtml_legend=1 00:19:32.865 --rc geninfo_all_blocks=1 00:19:32.865 --rc geninfo_unexecuted_blocks=1 00:19:32.865 00:19:32.865 ' 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:32.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.865 --rc genhtml_branch_coverage=1 00:19:32.865 --rc genhtml_function_coverage=1 00:19:32.865 --rc genhtml_legend=1 00:19:32.865 --rc geninfo_all_blocks=1 00:19:32.865 --rc geninfo_unexecuted_blocks=1 00:19:32.865 00:19:32.865 ' 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:32.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.865 --rc genhtml_branch_coverage=1 00:19:32.865 --rc genhtml_function_coverage=1 00:19:32.865 --rc genhtml_legend=1 00:19:32.865 --rc geninfo_all_blocks=1 00:19:32.865 --rc geninfo_unexecuted_blocks=1 00:19:32.865 00:19:32.865 ' 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:32.865 09:10:09 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.865 09:10:09 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:32.865 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:32.865 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:19:32.866 Error setting digest 00:19:32.866 4042B38E477F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:32.866 4042B38E477F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:32.866 09:10:09 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:19:32.866 09:10:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:35.399 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:35.399 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:35.399 Found net devices under 0000:09:00.0: cvl_0_0 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.399 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:35.399 Found net devices under 0000:09:00.1: cvl_0_1 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:35.400 09:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:35.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:35.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:19:35.400 00:19:35.400 --- 10.0.0.2 ping statistics --- 00:19:35.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.400 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:35.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:35.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:19:35.400 00:19:35.400 --- 10.0.0.1 ping statistics --- 00:19:35.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.400 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:35.400 09:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3356958 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3356958 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 3356958 ']' 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:35.400 09:10:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:35.400 [2024-11-20 09:10:12.056456] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:19:35.400 [2024-11-20 09:10:12.056556] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:35.400 [2024-11-20 09:10:12.131133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.400 [2024-11-20 09:10:12.189980] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:35.400 [2024-11-20 09:10:12.190037] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:35.400 [2024-11-20 09:10:12.190065] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:35.400 [2024-11-20 09:10:12.190077] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:35.400 [2024-11-20 09:10:12.190087] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:35.400 [2024-11-20 09:10:12.190729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.400 09:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:35.400 09:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:19:35.400 09:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:35.400 09:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:35.400 09:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:35.400 09:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:35.400 09:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:35.400 09:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:35.400 09:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:35.400 09:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.x0p 00:19:35.400 09:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:35.400 09:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.x0p 00:19:35.400 09:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.x0p 00:19:35.400 09:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.x0p 00:19:35.400 09:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:35.659 [2024-11-20 09:10:12.596414] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:35.659 [2024-11-20 09:10:12.612385] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:35.659 [2024-11-20 09:10:12.612665] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:35.659 malloc0 00:19:35.659 09:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:35.659 09:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3357107 00:19:35.659 09:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3357107 /var/tmp/bdevperf.sock 00:19:35.659 09:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 3357107 ']' 00:19:35.659 09:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:35.659 09:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:35.659 09:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:35.659 09:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:35.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:35.659 09:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:35.659 09:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:35.918 [2024-11-20 09:10:12.745116] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:19:35.918 [2024-11-20 09:10:12.745208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3357107 ] 00:19:35.918 [2024-11-20 09:10:12.811162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.918 [2024-11-20 09:10:12.870422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:36.176 09:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:36.176 09:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:19:36.176 09:10:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.x0p 00:19:36.434 09:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:36.692 [2024-11-20 09:10:13.497405] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:36.693 TLSTESTn1 00:19:36.693 09:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:36.693 Running I/O for 10 seconds... 
00:19:39.000 3347.00 IOPS, 13.07 MiB/s [2024-11-20T08:10:16.982Z] 3437.50 IOPS, 13.43 MiB/s [2024-11-20T08:10:17.918Z] 3467.00 IOPS, 13.54 MiB/s [2024-11-20T08:10:18.850Z] 3486.25 IOPS, 13.62 MiB/s [2024-11-20T08:10:19.791Z] 3493.20 IOPS, 13.65 MiB/s [2024-11-20T08:10:20.722Z] 3497.17 IOPS, 13.66 MiB/s [2024-11-20T08:10:22.095Z] 3490.43 IOPS, 13.63 MiB/s [2024-11-20T08:10:23.028Z] 3495.38 IOPS, 13.65 MiB/s [2024-11-20T08:10:23.963Z] 3493.78 IOPS, 13.65 MiB/s [2024-11-20T08:10:23.963Z] 3494.50 IOPS, 13.65 MiB/s 00:19:46.934 Latency(us) 00:19:46.934 [2024-11-20T08:10:23.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:46.934 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:46.934 Verification LBA range: start 0x0 length 0x2000 00:19:46.934 TLSTESTn1 : 10.02 3500.45 13.67 0.00 0.00 36507.68 6796.33 49321.91 00:19:46.934 [2024-11-20T08:10:23.963Z] =================================================================================================================== 00:19:46.934 [2024-11-20T08:10:23.963Z] Total : 3500.45 13.67 0.00 0.00 36507.68 6796.33 49321.91 00:19:46.934 { 00:19:46.934 "results": [ 00:19:46.934 { 00:19:46.934 "job": "TLSTESTn1", 00:19:46.934 "core_mask": "0x4", 00:19:46.934 "workload": "verify", 00:19:46.934 "status": "finished", 00:19:46.934 "verify_range": { 00:19:46.934 "start": 0, 00:19:46.934 "length": 8192 00:19:46.934 }, 00:19:46.934 "queue_depth": 128, 00:19:46.934 "io_size": 4096, 00:19:46.934 "runtime": 10.019278, 00:19:46.934 "iops": 3500.451828964123, 00:19:46.934 "mibps": 13.673639956891105, 00:19:46.934 "io_failed": 0, 00:19:46.934 "io_timeout": 0, 00:19:46.934 "avg_latency_us": 36507.67969721546, 00:19:46.934 "min_latency_us": 6796.325925925926, 00:19:46.934 "max_latency_us": 49321.90814814815 00:19:46.934 } 00:19:46.934 ], 00:19:46.934 "core_count": 1 00:19:46.934 } 00:19:46.934 09:10:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:46.934 
09:10:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:46.934 09:10:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:19:46.934 09:10:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:19:46.934 09:10:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:19:46.934 09:10:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:46.934 09:10:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:19:46.934 09:10:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:19:46.934 09:10:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:19:46.934 09:10:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:46.934 nvmf_trace.0 00:19:46.934 09:10:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:19:46.934 09:10:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3357107 00:19:46.934 09:10:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 3357107 ']' 00:19:46.934 09:10:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 3357107 00:19:46.934 09:10:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:19:46.934 09:10:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:46.934 09:10:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3357107 00:19:46.934 09:10:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:46.934 09:10:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:46.934 09:10:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3357107' 00:19:46.934 killing process with pid 3357107 00:19:46.934 09:10:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 3357107 00:19:46.934 Received shutdown signal, test time was about 10.000000 seconds 00:19:46.934 00:19:46.934 Latency(us) 00:19:46.934 [2024-11-20T08:10:23.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:46.934 [2024-11-20T08:10:23.963Z] =================================================================================================================== 00:19:46.934 [2024-11-20T08:10:23.963Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:46.934 09:10:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 3357107 00:19:47.192 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:47.192 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:47.192 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:47.192 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:47.192 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:47.192 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:47.192 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:47.192 rmmod nvme_tcp 00:19:47.192 rmmod nvme_fabrics 00:19:47.192 rmmod nvme_keyring 00:19:47.192 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:19:47.192 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:47.192 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:47.192 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3356958 ']' 00:19:47.192 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3356958 00:19:47.192 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 3356958 ']' 00:19:47.192 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 3356958 00:19:47.192 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:19:47.192 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:47.192 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3356958 00:19:47.192 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:47.192 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:47.193 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3356958' 00:19:47.193 killing process with pid 3356958 00:19:47.193 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 3356958 00:19:47.193 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 3356958 00:19:47.451 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:47.451 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:47.451 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:47.451 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:19:47.451 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:47.451 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:47.451 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:47.451 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:47.451 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:47.451 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.451 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:47.451 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.x0p 00:19:49.989 00:19:49.989 real 0m17.073s 00:19:49.989 user 0m22.648s 00:19:49.989 sys 0m5.380s 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:49.989 ************************************ 00:19:49.989 END TEST nvmf_fips 00:19:49.989 ************************************ 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:49.989 ************************************ 00:19:49.989 START TEST nvmf_control_msg_list 00:19:49.989 ************************************ 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:49.989 * Looking for test storage... 00:19:49.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:49.989 09:10:26 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:49.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.989 --rc genhtml_branch_coverage=1 00:19:49.989 --rc genhtml_function_coverage=1 00:19:49.989 --rc genhtml_legend=1 00:19:49.989 --rc geninfo_all_blocks=1 00:19:49.989 --rc geninfo_unexecuted_blocks=1 00:19:49.989 00:19:49.989 ' 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:49.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.989 --rc genhtml_branch_coverage=1 00:19:49.989 --rc genhtml_function_coverage=1 00:19:49.989 --rc genhtml_legend=1 00:19:49.989 --rc geninfo_all_blocks=1 00:19:49.989 --rc geninfo_unexecuted_blocks=1 00:19:49.989 00:19:49.989 ' 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:49.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.989 --rc genhtml_branch_coverage=1 00:19:49.989 --rc genhtml_function_coverage=1 00:19:49.989 --rc genhtml_legend=1 00:19:49.989 --rc geninfo_all_blocks=1 00:19:49.989 --rc geninfo_unexecuted_blocks=1 00:19:49.989 00:19:49.989 ' 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # 
LCOV='lcov 00:19:49.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.989 --rc genhtml_branch_coverage=1 00:19:49.989 --rc genhtml_function_coverage=1 00:19:49.989 --rc genhtml_legend=1 00:19:49.989 --rc geninfo_all_blocks=1 00:19:49.989 --rc geninfo_unexecuted_blocks=1 00:19:49.989 00:19:49.989 ' 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:49.989 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:49.990 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:49.990 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:49.990 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:49.990 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.990 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.990 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.990 09:10:26 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:49.990 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.990 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:49.990 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:49.990 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:49.990 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:49.990 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:49.990 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:49.990 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:49.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:49.990 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:49.990 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:49.990 09:10:26 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:49.990 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:49.990 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:49.990 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:49.990 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:49.990 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:49.990 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:49.990 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.990 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:49.990 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:49.990 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:49.990 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:49.990 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:49.990 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:51.895 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:51.895 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:51.895 09:10:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:51.895 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:51.895 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:51.895 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:51.895 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:51.895 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:51.895 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:51.895 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:51.895 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:51.895 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:51.895 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:51.895 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:51.895 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:51.895 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:51.895 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:51.895 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:51.895 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:51.895 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:51.895 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:51.895 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:51.895 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:51.895 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:51.895 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:51.895 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:51.895 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:51.895 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:51.895 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:51.895 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:51.895 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:51.895 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:51.895 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:51.895 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:19:51.895 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:51.895 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:51.896 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:51.896 09:10:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:51.896 Found net devices under 0000:09:00.0: cvl_0_0 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:51.896 09:10:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:51.896 Found net devices under 0000:09:00.1: cvl_0_1 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:51.896 09:10:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:51.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:51.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:19:51.896 00:19:51.896 --- 10.0.0.2 ping statistics --- 00:19:51.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.896 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:51.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:51.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:19:51.896 00:19:51.896 --- 10.0.0.1 ping statistics --- 00:19:51.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.896 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3360367 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3360367 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 3360367 ']' 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:51.896 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:52.155 [2024-11-20 09:10:28.926419] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:19:52.155 [2024-11-20 09:10:28.926497] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.155 [2024-11-20 09:10:29.000109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.155 [2024-11-20 09:10:29.059100] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.155 [2024-11-20 09:10:29.059164] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.155 [2024-11-20 09:10:29.059177] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.155 [2024-11-20 09:10:29.059188] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.155 [2024-11-20 09:10:29.059210] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:52.155 [2024-11-20 09:10:29.059846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.155 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:52.155 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:19:52.155 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:52.155 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:52.155 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:52.412 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.412 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:52.412 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:52.412 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:52.412 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.412 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:52.412 [2024-11-20 09:10:29.204000] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:52.412 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.412 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:52.412 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.412 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:52.412 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.412 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:52.412 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.412 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:52.412 Malloc0 00:19:52.412 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.412 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:52.412 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.412 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:52.412 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.412 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:52.413 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.413 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:52.413 [2024-11-20 09:10:29.244324] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.413 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.413 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3360395 00:19:52.413 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:52.413 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3360396 00:19:52.413 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:52.413 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3360397 00:19:52.413 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3360395 00:19:52.413 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:52.413 [2024-11-20 09:10:29.302797] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:19:52.413 [2024-11-20 09:10:29.312850] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:52.413 [2024-11-20 09:10:29.313053] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:53.786 Initializing NVMe Controllers 00:19:53.786 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:53.786 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:53.786 Initialization complete. Launching workers. 00:19:53.786 ======================================================== 00:19:53.786 Latency(us) 00:19:53.786 Device Information : IOPS MiB/s Average min max 00:19:53.786 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3531.00 13.79 282.85 154.99 630.89 00:19:53.786 ======================================================== 00:19:53.786 Total : 3531.00 13.79 282.85 154.99 630.89 00:19:53.786 00:19:53.786 Initializing NVMe Controllers 00:19:53.786 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:53.786 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:53.786 Initialization complete. Launching workers. 
00:19:53.786 ======================================================== 00:19:53.786 Latency(us) 00:19:53.786 Device Information : IOPS MiB/s Average min max 00:19:53.786 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3265.00 12.75 305.91 185.42 40915.46 00:19:53.786 ======================================================== 00:19:53.786 Total : 3265.00 12.75 305.91 185.42 40915.46 00:19:53.786 00:19:53.786 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3360396 00:19:53.786 Initializing NVMe Controllers 00:19:53.786 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:53.786 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:53.786 Initialization complete. Launching workers. 00:19:53.786 ======================================================== 00:19:53.786 Latency(us) 00:19:53.786 Device Information : IOPS MiB/s Average min max 00:19:53.786 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3502.98 13.68 285.03 152.86 640.28 00:19:53.786 ======================================================== 00:19:53.786 Total : 3502.98 13.68 285.03 152.86 640.28 00:19:53.786 00:19:53.786 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3360397 00:19:53.786 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:53.786 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:53.786 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:53.786 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:53.786 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:53.786 09:10:30 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:53.786 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:53.786 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:53.786 rmmod nvme_tcp 00:19:53.786 rmmod nvme_fabrics 00:19:53.786 rmmod nvme_keyring 00:19:53.786 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:53.786 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:53.786 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:53.786 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 3360367 ']' 00:19:53.786 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3360367 00:19:53.786 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 3360367 ']' 00:19:53.786 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 3360367 00:19:53.786 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:19:53.787 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:53.787 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3360367 00:19:53.787 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:53.787 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:53.787 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- 
# echo 'killing process with pid 3360367' 00:19:53.787 killing process with pid 3360367 00:19:53.787 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 3360367 00:19:53.787 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 3360367 00:19:53.787 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:53.787 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:53.787 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:53.787 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:53.787 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:53.787 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:53.787 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:53.787 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:53.787 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:53.787 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.787 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:53.787 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.324 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:56.324 00:19:56.324 real 0m6.311s 00:19:56.324 user 0m5.427s 
00:19:56.324 sys 0m2.738s 00:19:56.324 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:56.324 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:56.324 ************************************ 00:19:56.324 END TEST nvmf_control_msg_list 00:19:56.324 ************************************ 00:19:56.324 09:10:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:56.324 09:10:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:56.324 09:10:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:56.324 09:10:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:56.324 ************************************ 00:19:56.324 START TEST nvmf_wait_for_buf 00:19:56.324 ************************************ 00:19:56.325 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:56.325 * Looking for test storage... 
00:19:56.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:56.325 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:56.325 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:19:56.325 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # 
export 'LCOV_OPTS= 00:19:56.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.325 --rc genhtml_branch_coverage=1 00:19:56.325 --rc genhtml_function_coverage=1 00:19:56.325 --rc genhtml_legend=1 00:19:56.325 --rc geninfo_all_blocks=1 00:19:56.325 --rc geninfo_unexecuted_blocks=1 00:19:56.325 00:19:56.325 ' 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:56.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.325 --rc genhtml_branch_coverage=1 00:19:56.325 --rc genhtml_function_coverage=1 00:19:56.325 --rc genhtml_legend=1 00:19:56.325 --rc geninfo_all_blocks=1 00:19:56.325 --rc geninfo_unexecuted_blocks=1 00:19:56.325 00:19:56.325 ' 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:56.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.325 --rc genhtml_branch_coverage=1 00:19:56.325 --rc genhtml_function_coverage=1 00:19:56.325 --rc genhtml_legend=1 00:19:56.325 --rc geninfo_all_blocks=1 00:19:56.325 --rc geninfo_unexecuted_blocks=1 00:19:56.325 00:19:56.325 ' 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:56.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:56.325 --rc genhtml_branch_coverage=1 00:19:56.325 --rc genhtml_function_coverage=1 00:19:56.325 --rc genhtml_legend=1 00:19:56.325 --rc geninfo_all_blocks=1 00:19:56.325 --rc geninfo_unexecuted_blocks=1 00:19:56.325 00:19:56.325 ' 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:56.325 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:56.325 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:56.326 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:56.326 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:56.326 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.326 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:56.326 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.326 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:56.326 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:19:56.326 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:56.326 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:58.229 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:58.229 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:58.229 Found net devices under 0000:09:00.0: cvl_0_0 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:58.229 09:10:35 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:58.229 Found net devices under 0000:09:00.1: cvl_0_1 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:58.229 09:10:35 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:58.229 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:58.230 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:58.230 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:58.230 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:58.230 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:58.230 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:58.230 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:58.230 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:58.230 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:58.230 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:58.230 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:58.488 09:10:35 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:58.488 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:58.488 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:58.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:58.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:19:58.488 00:19:58.488 --- 10.0.0.2 ping statistics --- 00:19:58.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.488 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:19:58.488 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:58.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:58.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:19:58.488 00:19:58.488 --- 10.0.0.1 ping statistics --- 00:19:58.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.488 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:19:58.488 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:58.488 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:19:58.488 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:58.488 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:58.488 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:58.488 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:58.488 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:58.488 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:58.488 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:58.488 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:58.488 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:58.488 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:58.488 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:58.488 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3362589 00:19:58.488 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:58.488 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3362589 00:19:58.488 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 3362589 ']' 00:19:58.488 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.488 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:58.488 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.488 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:58.488 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:58.488 [2024-11-20 09:10:35.344668] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:19:58.488 [2024-11-20 09:10:35.344747] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.488 [2024-11-20 09:10:35.414358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.488 [2024-11-20 09:10:35.470605] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.488 [2024-11-20 09:10:35.470659] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:58.488 [2024-11-20 09:10:35.470688] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:58.488 [2024-11-20 09:10:35.470699] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:58.488 [2024-11-20 09:10:35.470708] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:58.488 [2024-11-20 09:10:35.471315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:58.746 
09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:58.746 Malloc0 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:19:58.746 [2024-11-20 09:10:35.714278] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:58.746 [2024-11-20 09:10:35.738495] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:19:58.746 09:10:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:59.004 [2024-11-20 09:10:35.832446] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:00.379 Initializing NVMe Controllers 00:20:00.379 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:00.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:20:00.379 Initialization complete. Launching workers. 00:20:00.379 ======================================================== 00:20:00.379 Latency(us) 00:20:00.379 Device Information : IOPS MiB/s Average min max 00:20:00.379 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 125.00 15.62 33325.67 23972.49 63858.10 00:20:00.379 ======================================================== 00:20:00.379 Total : 125.00 15.62 33325.67 23972.49 63858.10 00:20:00.379 00:20:00.379 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:20:00.379 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:20:00.379 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.379 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:00.379 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.379 09:10:37 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1974 00:20:00.379 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1974 -eq 0 ]] 00:20:00.379 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:00.379 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:20:00.379 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:00.379 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:20:00.379 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:00.379 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:20:00.379 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:00.379 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:00.379 rmmod nvme_tcp 00:20:00.379 rmmod nvme_fabrics 00:20:00.379 rmmod nvme_keyring 00:20:00.379 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:00.379 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:20:00.379 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:20:00.379 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3362589 ']' 00:20:00.379 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3362589 00:20:00.379 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 3362589 ']' 00:20:00.379 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 3362589 
00:20:00.379 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # uname 00:20:00.379 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:00.379 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3362589 00:20:00.379 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:00.379 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:00.379 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3362589' 00:20:00.379 killing process with pid 3362589 00:20:00.379 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 3362589 00:20:00.379 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 3362589 00:20:00.637 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:00.637 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:00.637 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:00.637 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:00.637 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:20:00.637 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:00.637 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:20:00.637 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:00.637 09:10:37 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:00.637 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.637 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:00.637 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.542 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:02.542 00:20:02.542 real 0m6.661s 00:20:02.542 user 0m3.032s 00:20:02.542 sys 0m2.081s 00:20:02.542 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:02.542 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:02.542 ************************************ 00:20:02.542 END TEST nvmf_wait_for_buf 00:20:02.542 ************************************ 00:20:02.800 09:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:02.801 09:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:20:02.801 09:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:20:02.801 09:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:20:02.801 09:10:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:20:02.801 09:10:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:05.337 
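The teardown traced above kills the target process by pid, waits for it to exit, then restores iptables while dropping SPDK-managed rules. A minimal sketch of that cleanup pattern (the `cleanup_pid` name is hypothetical, not an SPDK helper; the iptables step is left commented because it needs root):

```shell
# Sketch of the process-cleanup pattern traced in the log: kill by pid,
# wait for exit, then (root-only, so commented here) restore iptables
# minus SPDK-tagged rules, as nvmf/common.sh's iptr does.
cleanup_pid() {
    local app_pid="$1"
    echo "killing process with pid ${app_pid}"
    kill "${app_pid}" 2>/dev/null || true
    wait "${app_pid}" 2>/dev/null || true   # wait only works for children of this shell
    # iptables-save | grep -v SPDK_NVMF | iptables-restore
}

# Exercise the pattern on a harmless background sleep:
sleep 60 &
bg=$!
cleanup_pid "$bg"
if kill -0 "$bg" 2>/dev/null; then status_msg="still running"; else status_msg="stopped"; fi
```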
09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:05.337 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:05.337 09:10:41 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:05.337 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:05.337 Found net devices under 0000:09:00.0: cvl_0_0 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:05.337 Found net devices under 0000:09:00.1: cvl_0_1 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:05.337 09:10:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:05.338 ************************************ 00:20:05.338 START TEST nvmf_perf_adq 00:20:05.338 ************************************ 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
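The loop above maps each matching PCI function to its kernel net device through sysfs: it globs `/sys/bus/pci/devices/<pci>/net/*` and strips the path prefix to get interface names like `cvl_0_0`. A hedged sketch of that lookup (the `SYS_ROOT`/`sys_root` indirection is an assumption added here so the sketch can run against a fake sysfs tree instead of real hardware):

```shell
# Sketch of the PCI-address -> net-interface lookup from nvmf/common.sh:
# list <sysfs>/bus/pci/devices/<pci>/net/* and keep only the basenames.
sys_root="${SYS_ROOT:-/sys}"
list_net_devs_for_pci() {
    local pci="$1"
    local -a devs=("${sys_root}/bus/pci/devices/${pci}/net/"*)
    devs=("${devs[@]##*/}")   # strip the path, mirroring pci_net_devs[@]##*/ in the trace
    printf '%s\n' "${devs[@]}"
}

# Exercise against a fake tree; the address and interface name echo the log:
sys_root=$(mktemp -d)
mkdir -p "${sys_root}/bus/pci/devices/0000:09:00.0/net/cvl_0_0"
found=$(list_net_devs_for_pci "0000:09:00.0")
```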
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:05.338 * Looking for test storage... 00:20:05.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
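The `lt 1.15 2` trace above compares dotted versions by splitting each on separators and comparing components numerically, field by field. A self-contained sketch of that comparison, simplified to a less-than check only (not the full `cmp_versions` with its `>`/`=` operators):

```shell
# Simplified version of the lt/cmp_versions logic traced above:
# split both versions on "." or "-", then compare field by field as
# integers, treating missing fields as 0.
version_lt() {
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v a b
    for (( v = 0; v < max; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not less-than
}

version_lt 1.15 2 && lt_result=yes || lt_result=no
version_lt 2 1.15 && gt_result=yes || gt_result=no
```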
-- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:05.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.338 --rc genhtml_branch_coverage=1 00:20:05.338 --rc genhtml_function_coverage=1 00:20:05.338 --rc genhtml_legend=1 00:20:05.338 --rc geninfo_all_blocks=1 00:20:05.338 --rc geninfo_unexecuted_blocks=1 00:20:05.338 00:20:05.338 ' 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:05.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.338 --rc genhtml_branch_coverage=1 00:20:05.338 --rc genhtml_function_coverage=1 00:20:05.338 --rc genhtml_legend=1 00:20:05.338 --rc geninfo_all_blocks=1 00:20:05.338 --rc geninfo_unexecuted_blocks=1 00:20:05.338 00:20:05.338 ' 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:05.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.338 --rc genhtml_branch_coverage=1 00:20:05.338 --rc genhtml_function_coverage=1 00:20:05.338 --rc genhtml_legend=1 00:20:05.338 --rc geninfo_all_blocks=1 00:20:05.338 --rc geninfo_unexecuted_blocks=1 00:20:05.338 00:20:05.338 ' 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:05.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.338 --rc genhtml_branch_coverage=1 00:20:05.338 --rc genhtml_function_coverage=1 00:20:05.338 --rc genhtml_legend=1 00:20:05.338 --rc geninfo_all_blocks=1 00:20:05.338 --rc geninfo_unexecuted_blocks=1 00:20:05.338 00:20:05.338 ' 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:05.338 09:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:05.338 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:05.338 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:05.338 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:05.338 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:05.338 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:05.338 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:05.338 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:20:05.338 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:20:05.338 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:05.338 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:05.338 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:05.338 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.338 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.338 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.338 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:05.338 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.338 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:20:05.338 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:05.338 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:05.338 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:05.338 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:05.338 09:10:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:05.338 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:05.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:05.338 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:05.338 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:05.339 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:05.339 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:05.339 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:05.339 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:07.314 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:07.314 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:07.314 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:07.314 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:07.314 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:07.315 09:10:44 
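The `[: : integer expression expected` error recorded above happens because an empty variable reaches bash's arithmetic `-eq` test (`'[' '' -eq 1 ']'`). A small sketch of the failure mode and the usual guard, with a default supplied via parameter expansion (variable names here are illustrative, not the ones in common.sh):

```shell
# Reproduce the shape of the logged error safely: an empty value in an
# arithmetic test triggers "integer expression expected", while a
# ":-0" default makes the test well-formed.
flag=""                             # empty, as in the trace
if [ "${flag:-0}" -eq 1 ]; then     # ":-0" supplies a numeric default for empty/unset
    guard_result="enabled"
else
    guard_result="disabled"
fi
```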
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:07.315 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:07.315 
Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:07.315 Found net devices under 0000:09:00.0: cvl_0_0 00:20:07.315 09:10:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:07.315 Found net devices under 0000:09:00.1: cvl_0_1 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:20:07.315 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:07.885 09:10:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:09.788 09:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
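The `adq_reload_driver` sequence traced above loads the `sch_mqprio` qdisc module, cycles the `ice` NIC driver, and sleeps so the links can re-enumerate before the test continues. A hedged sketch of that sequence; the `RUN` dry-run indirection is an assumption added here so the sketch can run without root (set `RUN=""` on a test host to execute for real):

```shell
# Sketch of the adq_reload_driver sequence from perf_adq.sh: qdisc module,
# driver cycle, then a settle delay (5 s, matching the trace).
run="${RUN:-echo}"   # defaults to echo (dry-run); empty RUN executes the commands
adq_reload_driver() {
    $run modprobe -a sch_mqprio   # mqprio qdisc needed for ADQ traffic classes
    $run rmmod ice || true        # ignore failure if the driver was not loaded
    $run modprobe ice
    $run sleep 5                  # let udev recreate the net devices
}

reload_log=$(adq_reload_driver)
```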
nvmf/common.sh@315 -- # pci_devs=() 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:15.067 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:15.067 09:10:51 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:15.067 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:15.067 Found net devices under 0000:09:00.0: cvl_0_0 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:15.067 Found net devices under 0000:09:00.1: cvl_0_1 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:15.067 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:15.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:15.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:20:15.068 00:20:15.068 --- 10.0.0.2 ping statistics --- 00:20:15.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.068 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:15.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:15.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:20:15.068 00:20:15.068 --- 10.0.0.1 ping statistics --- 00:20:15.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.068 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3367314 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3367314 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 3367314 ']' 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:15.068 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.068 [2024-11-20 09:10:51.970358] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:20:15.068 [2024-11-20 09:10:51.970455] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:15.068 [2024-11-20 09:10:52.041987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:15.326 [2024-11-20 09:10:52.103335] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:15.326 [2024-11-20 09:10:52.103382] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:15.326 [2024-11-20 09:10:52.103403] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:15.326 [2024-11-20 09:10:52.103425] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:15.326 [2024-11-20 09:10:52.103442] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:15.326 [2024-11-20 09:10:52.105062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.326 [2024-11-20 09:10:52.105123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:15.326 [2024-11-20 09:10:52.105192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:15.326 [2024-11-20 09:10:52.105196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.326 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:15.326 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:20:15.326 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:15.326 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:15.326 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.327 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:15.327 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:20:15.327 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:15.327 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:15.327 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.327 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.327 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.327 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:15.327 09:10:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:15.327 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.327 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.327 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.327 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:15.327 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.327 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.585 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.585 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:15.585 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.585 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.585 [2024-11-20 09:10:52.375494] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:15.585 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.585 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:15.585 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.585 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.585 Malloc1 00:20:15.585 09:10:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.585 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:15.585 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.585 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.585 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.585 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:15.585 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.585 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.585 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.585 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:15.585 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.585 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.585 [2024-11-20 09:10:52.443813] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:15.585 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.585 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3367464 00:20:15.585 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:20:15.585 09:10:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:17.485 09:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:20:17.485 09:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.485 09:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:17.485 09:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.485 09:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:20:17.485 "tick_rate": 2700000000, 00:20:17.485 "poll_groups": [ 00:20:17.485 { 00:20:17.485 "name": "nvmf_tgt_poll_group_000", 00:20:17.485 "admin_qpairs": 1, 00:20:17.485 "io_qpairs": 1, 00:20:17.485 "current_admin_qpairs": 1, 00:20:17.485 "current_io_qpairs": 1, 00:20:17.485 "pending_bdev_io": 0, 00:20:17.485 "completed_nvme_io": 20173, 00:20:17.485 "transports": [ 00:20:17.485 { 00:20:17.485 "trtype": "TCP" 00:20:17.485 } 00:20:17.485 ] 00:20:17.485 }, 00:20:17.485 { 00:20:17.485 "name": "nvmf_tgt_poll_group_001", 00:20:17.485 "admin_qpairs": 0, 00:20:17.485 "io_qpairs": 1, 00:20:17.485 "current_admin_qpairs": 0, 00:20:17.485 "current_io_qpairs": 1, 00:20:17.485 "pending_bdev_io": 0, 00:20:17.485 "completed_nvme_io": 20418, 00:20:17.485 "transports": [ 00:20:17.485 { 00:20:17.485 "trtype": "TCP" 00:20:17.485 } 00:20:17.485 ] 00:20:17.485 }, 00:20:17.485 { 00:20:17.485 "name": "nvmf_tgt_poll_group_002", 00:20:17.485 "admin_qpairs": 0, 00:20:17.485 "io_qpairs": 1, 00:20:17.485 "current_admin_qpairs": 0, 00:20:17.485 "current_io_qpairs": 1, 00:20:17.485 "pending_bdev_io": 0, 00:20:17.485 "completed_nvme_io": 20403, 00:20:17.486 
"transports": [ 00:20:17.486 { 00:20:17.486 "trtype": "TCP" 00:20:17.486 } 00:20:17.486 ] 00:20:17.486 }, 00:20:17.486 { 00:20:17.486 "name": "nvmf_tgt_poll_group_003", 00:20:17.486 "admin_qpairs": 0, 00:20:17.486 "io_qpairs": 1, 00:20:17.486 "current_admin_qpairs": 0, 00:20:17.486 "current_io_qpairs": 1, 00:20:17.486 "pending_bdev_io": 0, 00:20:17.486 "completed_nvme_io": 19781, 00:20:17.486 "transports": [ 00:20:17.486 { 00:20:17.486 "trtype": "TCP" 00:20:17.486 } 00:20:17.486 ] 00:20:17.486 } 00:20:17.486 ] 00:20:17.486 }' 00:20:17.486 09:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:17.486 09:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:20:17.486 09:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:20:17.486 09:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:20:17.486 09:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3367464 00:20:25.591 Initializing NVMe Controllers 00:20:25.591 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:25.591 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:25.591 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:25.591 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:25.591 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:25.591 Initialization complete. Launching workers. 
00:20:25.591 ======================================================== 00:20:25.591 Latency(us) 00:20:25.591 Device Information : IOPS MiB/s Average min max 00:20:25.591 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10412.43 40.67 6148.24 2549.22 10251.77 00:20:25.591 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10713.81 41.85 5973.55 2183.83 9470.14 00:20:25.591 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10724.91 41.89 5966.76 2622.21 9734.94 00:20:25.591 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10687.02 41.75 5988.73 2637.47 9804.79 00:20:25.591 ======================================================== 00:20:25.591 Total : 42538.17 166.16 6018.41 2183.83 10251.77 00:20:25.591 00:20:25.591 09:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:25.591 09:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:25.591 09:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:25.591 09:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:25.591 09:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:25.591 09:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:25.591 09:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:25.591 rmmod nvme_tcp 00:20:25.849 rmmod nvme_fabrics 00:20:25.849 rmmod nvme_keyring 00:20:25.849 09:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:25.849 09:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:25.849 09:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:25.849 09:11:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3367314 ']' 00:20:25.849 09:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3367314 00:20:25.849 09:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 3367314 ']' 00:20:25.849 09:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 3367314 00:20:25.849 09:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:20:25.849 09:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:25.849 09:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3367314 00:20:25.849 09:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:25.849 09:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:25.849 09:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3367314' 00:20:25.849 killing process with pid 3367314 00:20:25.849 09:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 3367314 00:20:25.849 09:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 3367314 00:20:26.107 09:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:26.107 09:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:26.107 09:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:26.107 09:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:26.107 09:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:26.107 
09:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:26.107 09:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:26.107 09:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:26.107 09:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:26.107 09:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.107 09:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:26.107 09:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.010 09:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:28.010 09:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:28.010 09:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:28.010 09:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:28.578 09:11:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:31.106 09:11:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:36.380 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:36.380 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:36.380 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:36.380 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:36.380 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:36.381 09:11:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:36.381 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:36.381 
Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:36.381 Found net devices under 0000:09:00.0: cvl_0_0 00:20:36.381 09:11:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:36.381 Found net devices under 0000:09:00.1: cvl_0_1 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:36.381 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:36.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:36.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:20:36.382 00:20:36.382 --- 10.0.0.2 ping statistics --- 00:20:36.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.382 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:36.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:36.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:20:36.382 00:20:36.382 --- 10.0.0.1 ping statistics --- 00:20:36.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.382 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:36.382 net.core.busy_poll = 1 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:36.382 net.core.busy_read = 1 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3370028 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 
3370028 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 3370028 ']' 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:36.382 09:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:36.382 [2024-11-20 09:11:12.887099] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:20:36.382 [2024-11-20 09:11:12.887176] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.382 [2024-11-20 09:11:12.963069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:36.382 [2024-11-20 09:11:13.021813] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:36.382 [2024-11-20 09:11:13.021877] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:36.382 [2024-11-20 09:11:13.021897] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:36.382 [2024-11-20 09:11:13.021914] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:36.382 [2024-11-20 09:11:13.021928] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:36.382 [2024-11-20 09:11:13.027324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.382 [2024-11-20 09:11:13.027392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.382 [2024-11-20 09:11:13.027459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:36.382 [2024-11-20 09:11:13.027463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.382 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:36.382 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:20:36.382 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:36.382 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:36.382 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:36.382 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.382 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:36.382 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:36.382 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:36.382 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.382 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:36.382 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:20:36.382 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:36.383 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:36.383 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.383 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:36.383 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.383 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:36.383 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.383 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:36.383 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.383 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:36.383 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.383 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:36.383 [2024-11-20 09:11:13.294844] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.383 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.383 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:36.383 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.383 09:11:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:36.383 Malloc1 00:20:36.383 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.383 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:36.383 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.383 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:36.383 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.383 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:36.383 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.383 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:36.383 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.383 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:36.383 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.383 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:36.383 [2024-11-20 09:11:13.353416] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.383 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.383 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3370111 
00:20:36.383 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:36.383 09:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:38.912 09:11:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:38.912 09:11:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.912 09:11:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.912 09:11:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.912 09:11:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:20:38.912 "tick_rate": 2700000000, 00:20:38.912 "poll_groups": [ 00:20:38.912 { 00:20:38.912 "name": "nvmf_tgt_poll_group_000", 00:20:38.912 "admin_qpairs": 1, 00:20:38.912 "io_qpairs": 2, 00:20:38.912 "current_admin_qpairs": 1, 00:20:38.912 "current_io_qpairs": 2, 00:20:38.912 "pending_bdev_io": 0, 00:20:38.912 "completed_nvme_io": 25499, 00:20:38.912 "transports": [ 00:20:38.912 { 00:20:38.912 "trtype": "TCP" 00:20:38.912 } 00:20:38.912 ] 00:20:38.912 }, 00:20:38.912 { 00:20:38.912 "name": "nvmf_tgt_poll_group_001", 00:20:38.912 "admin_qpairs": 0, 00:20:38.912 "io_qpairs": 2, 00:20:38.912 "current_admin_qpairs": 0, 00:20:38.912 "current_io_qpairs": 2, 00:20:38.912 "pending_bdev_io": 0, 00:20:38.912 "completed_nvme_io": 25343, 00:20:38.912 "transports": [ 00:20:38.912 { 00:20:38.912 "trtype": "TCP" 00:20:38.912 } 00:20:38.912 ] 00:20:38.912 }, 00:20:38.912 { 00:20:38.912 "name": "nvmf_tgt_poll_group_002", 00:20:38.912 "admin_qpairs": 0, 00:20:38.912 "io_qpairs": 0, 00:20:38.912 "current_admin_qpairs": 0, 
00:20:38.912 "current_io_qpairs": 0, 00:20:38.912 "pending_bdev_io": 0, 00:20:38.912 "completed_nvme_io": 0, 00:20:38.912 "transports": [ 00:20:38.912 { 00:20:38.912 "trtype": "TCP" 00:20:38.912 } 00:20:38.912 ] 00:20:38.912 }, 00:20:38.912 { 00:20:38.912 "name": "nvmf_tgt_poll_group_003", 00:20:38.912 "admin_qpairs": 0, 00:20:38.912 "io_qpairs": 0, 00:20:38.912 "current_admin_qpairs": 0, 00:20:38.912 "current_io_qpairs": 0, 00:20:38.912 "pending_bdev_io": 0, 00:20:38.912 "completed_nvme_io": 0, 00:20:38.912 "transports": [ 00:20:38.912 { 00:20:38.912 "trtype": "TCP" 00:20:38.912 } 00:20:38.912 ] 00:20:38.912 } 00:20:38.912 ] 00:20:38.912 }' 00:20:38.912 09:11:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:38.912 09:11:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:20:38.912 09:11:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:20:38.912 09:11:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:20:38.912 09:11:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3370111 00:20:47.018 Initializing NVMe Controllers 00:20:47.018 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:47.018 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:47.018 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:47.018 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:47.018 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:47.018 Initialization complete. Launching workers. 
00:20:47.019 ======================================================== 00:20:47.019 Latency(us) 00:20:47.019 Device Information : IOPS MiB/s Average min max 00:20:47.019 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6088.10 23.78 10517.36 1784.96 54305.34 00:20:47.019 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6130.80 23.95 10438.82 1074.74 55283.86 00:20:47.019 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7130.60 27.85 8976.79 1710.25 53594.92 00:20:47.019 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7391.40 28.87 8697.32 1529.11 53586.91 00:20:47.019 ======================================================== 00:20:47.019 Total : 26740.90 104.46 9585.48 1074.74 55283.86 00:20:47.019 00:20:47.019 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:20:47.019 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:47.019 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:47.019 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:47.019 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:47.019 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:47.019 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:47.019 rmmod nvme_tcp 00:20:47.019 rmmod nvme_fabrics 00:20:47.019 rmmod nvme_keyring 00:20:47.019 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:47.019 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:47.019 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:47.019 09:11:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3370028 ']' 00:20:47.019 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3370028 00:20:47.019 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 3370028 ']' 00:20:47.019 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 3370028 00:20:47.019 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:20:47.019 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:47.019 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3370028 00:20:47.019 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:47.019 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:47.019 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3370028' 00:20:47.019 killing process with pid 3370028 00:20:47.019 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 3370028 00:20:47.019 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 3370028 00:20:47.019 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:47.019 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:47.019 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:47.019 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:47.019 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:47.019 
09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:47.019 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:47.019 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:47.019 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:47.019 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:47.019 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:47.019 09:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.309 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:50.309 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:20:50.309 00:20:50.309 real 0m45.093s 00:20:50.309 user 2m40.514s 00:20:50.309 sys 0m9.236s 00:20:50.309 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:50.309 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:50.309 ************************************ 00:20:50.309 END TEST nvmf_perf_adq 00:20:50.309 ************************************ 00:20:50.309 09:11:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:50.309 09:11:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:50.309 09:11:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:50.309 09:11:26 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:20:50.309 ************************************ 00:20:50.309 START TEST nvmf_shutdown 00:20:50.309 ************************************ 00:20:50.309 09:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:50.309 * Looking for test storage... 00:20:50.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:50.309 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:50.309 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:20:50.309 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:50.309 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:50.309 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:50.309 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:50.309 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:50.309 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:50.310 09:11:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:50.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.310 --rc genhtml_branch_coverage=1 00:20:50.310 --rc genhtml_function_coverage=1 00:20:50.310 --rc genhtml_legend=1 00:20:50.310 --rc geninfo_all_blocks=1 00:20:50.310 --rc geninfo_unexecuted_blocks=1 00:20:50.310 00:20:50.310 ' 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:50.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.310 --rc genhtml_branch_coverage=1 00:20:50.310 --rc genhtml_function_coverage=1 00:20:50.310 --rc genhtml_legend=1 00:20:50.310 --rc geninfo_all_blocks=1 00:20:50.310 --rc geninfo_unexecuted_blocks=1 00:20:50.310 00:20:50.310 ' 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:50.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.310 --rc genhtml_branch_coverage=1 00:20:50.310 --rc genhtml_function_coverage=1 00:20:50.310 --rc genhtml_legend=1 00:20:50.310 --rc geninfo_all_blocks=1 00:20:50.310 --rc geninfo_unexecuted_blocks=1 00:20:50.310 00:20:50.310 ' 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:50.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.310 --rc genhtml_branch_coverage=1 00:20:50.310 --rc genhtml_function_coverage=1 00:20:50.310 --rc genhtml_legend=1 00:20:50.310 --rc geninfo_all_blocks=1 00:20:50.310 --rc geninfo_unexecuted_blocks=1 00:20:50.310 00:20:50.310 ' 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:50.310 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.311 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:20:50.311 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:50.311 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:50.311 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:50.311 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:50.311 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:50.311 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:50.311 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:50.311 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:50.311 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:50.311 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:50.311 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:50.311 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:50.311 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:50.311 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:50.311 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:50.311 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:50.311 ************************************ 00:20:50.311 START TEST nvmf_shutdown_tc1 00:20:50.311 ************************************ 00:20:50.311 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:20:50.311 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:20:50.311 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:50.311 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:50.311 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:50.311 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:50.311 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:50.311 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:50.311 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.311 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:20:50.311 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.311 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:50.311 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:50.311 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:50.311 09:11:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:52.901 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:52.901 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:52.901 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:52.901 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:52.901 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:52.901 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:52.901 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:52.901 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:20:52.901 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:52.901 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:20:52.901 09:11:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:20:52.901 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:20:52.901 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:20:52.901 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:20:52.901 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:52.901 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:52.901 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:52.901 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:52.901 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:52.901 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:52.901 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:52.901 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:52.901 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:52.901 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:52.901 09:11:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:52.901 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:52.901 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:52.901 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:52.901 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:52.901 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:52.901 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:52.901 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:52.901 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:52.901 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:52.901 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:52.901 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:52.901 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.902 09:11:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:52.902 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:52.902 Found net devices under 0000:09:00.0: cvl_0_0 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:52.902 Found net devices under 0000:09:00.1: cvl_0_1 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:52.902 09:11:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:52.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:52.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:20:52.902 00:20:52.902 --- 10.0.0.2 ping statistics --- 00:20:52.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.902 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:52.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:52.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:20:52.902 00:20:52.902 --- 10.0.0.1 ping statistics --- 00:20:52.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.902 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3373414 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3373414 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 3373414 ']' 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:52.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:52.902 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:52.902 [2024-11-20 09:11:29.688969] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:20:52.902 [2024-11-20 09:11:29.689062] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:52.902 [2024-11-20 09:11:29.764004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:52.902 [2024-11-20 09:11:29.824600] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:52.902 [2024-11-20 09:11:29.824670] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:52.903 [2024-11-20 09:11:29.824683] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:52.903 [2024-11-20 09:11:29.824694] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:52.903 [2024-11-20 09:11:29.824704] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
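The nvmfappstart/waitforlisten exchange traced above (common.sh@508-510) launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then polls until the app answers on /var/tmp/spdk.sock. A rough sketch of that shape, with the readiness probe simplified to a socket-existence check (the real helper also issues an rpc.py call before declaring the target up, and actually launching needs root plus the veth/namespace setup):

```shell
#!/usr/bin/env bash
# Rough sketch, not the real common.sh helper. The target's argv is kept as
# a bash array; running it inside the namespace is done by prepending the
# "ip netns exec <ns>" words to that array, exactly as common.sh@293 does.
NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
NVMF_APP=(nvmf_tgt -i 0 -e 0xFFFF -m 0x1E)
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")

# Simplified waitforlisten: succeed once the pid is alive and the RPC socket
# exists; fail fast if the process dies before it ever listens.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=${3:-100} i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # app died before listening
        [ -S "$rpc_addr" ] && return 0           # socket up: ready for RPCs
        sleep 0.1
    done
    return 1
}

# Launch would be (as root):  "${NVMF_APP[@]}" &  nvmfpid=$!
# followed by:                waitforlisten "$nvmfpid"
echo "${NVMF_APP[@]}"
```

The array-prefix idiom is why the log's start command reads `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x1E` as one argv.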
00:20:52.903 [2024-11-20 09:11:29.826463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:52.903 [2024-11-20 09:11:29.826510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:52.903 [2024-11-20 09:11:29.826558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:52.903 [2024-11-20 09:11:29.826562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.161 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:53.161 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:20:53.161 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:53.161 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:53.161 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:53.161 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:53.161 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:53.161 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.161 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:53.161 [2024-11-20 09:11:29.970068] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.161 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.161 09:11:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:53.161 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:53.161 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:53.161 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:53.161 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:53.161 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.161 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:53.161 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.161 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:53.161 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.161 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:53.161 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.161 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:53.161 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.161 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:20:53.161 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.161 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:53.161 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.161 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:53.161 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.161 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:53.161 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.161 09:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:53.161 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.161 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:53.161 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:53.161 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.161 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:53.161 Malloc1 00:20:53.161 [2024-11-20 09:11:30.054385] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.161 Malloc2 00:20:53.161 Malloc3 00:20:53.161 Malloc4 00:20:53.419 Malloc5 00:20:53.419 Malloc6 00:20:53.419 Malloc7 00:20:53.419 Malloc8 00:20:53.419 Malloc9 
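The shutdown.sh@27-29 loop above builds rpcs.txt by cat-ing one block of RPC commands per subsystem into the file, then replays the whole batch through a single rpc_cmd at shutdown.sh@36, which is what produces the Malloc1..Malloc10 lines. A sketch of that pattern follows; the specific RPC lines per subsystem are illustrative assumptions (the trace suppresses the cat payload), only the looping and append idiom are taken from the log:

```shell
#!/usr/bin/env bash
# Sketch of the create_subsystems loop. Per-subsystem RPC lines are guesses
# modeled on typical SPDK subsystem setup; the {1..10} array, the heredoc
# append, and the batch-replay step mirror the trace.
num_subsystems=({1..10})
rpcs=$(mktemp)   # stands in for $testdir/rpcs.txt, which the test rm -rf's first

for i in "${num_subsystems[@]}"; do
    cat >> "$rpcs" <<EOF
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done

# The batch would then be executed against the target in one go, e.g.
# rpc_cmd < "$rpcs", creating all ten Malloc bdevs and subsystems at once.
wc -l < "$rpcs"
```

The later gen_nvmf_target_json output in this log uses the same heredoc-capture idiom, but accumulates the fragments into a bash array (`config+=("$(cat <<-EOF ...)")`) and joins them with `IFS=,` into one JSON document for bdevperf.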
00:20:53.677 Malloc10 00:20:53.677 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.677 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:53.677 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:53.677 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:53.677 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3373594 00:20:53.677 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3373594 /var/tmp/bdevperf.sock 00:20:53.677 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 3373594 ']' 00:20:53.677 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:53.677 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:53.677 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:53.677 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:53.677 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:53.677 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:20:53.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:53.677 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:53.677 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:53.677 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.677 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:53.677 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.677 { 00:20:53.677 "params": { 00:20:53.677 "name": "Nvme$subsystem", 00:20:53.677 "trtype": "$TEST_TRANSPORT", 00:20:53.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.677 "adrfam": "ipv4", 00:20:53.677 "trsvcid": "$NVMF_PORT", 00:20:53.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.677 "hdgst": ${hdgst:-false}, 00:20:53.677 "ddgst": ${ddgst:-false} 00:20:53.677 }, 00:20:53.677 "method": "bdev_nvme_attach_controller" 00:20:53.677 } 00:20:53.677 EOF 00:20:53.677 )") 00:20:53.677 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:53.677 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.677 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.677 { 00:20:53.677 "params": { 00:20:53.677 "name": "Nvme$subsystem", 00:20:53.677 "trtype": "$TEST_TRANSPORT", 00:20:53.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.677 "adrfam": "ipv4", 00:20:53.677 "trsvcid": "$NVMF_PORT", 00:20:53.677 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.677 "hdgst": ${hdgst:-false}, 00:20:53.677 "ddgst": ${ddgst:-false} 00:20:53.677 }, 00:20:53.677 "method": "bdev_nvme_attach_controller" 00:20:53.677 } 00:20:53.677 EOF 00:20:53.677 )") 00:20:53.677 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:53.677 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.677 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.677 { 00:20:53.677 "params": { 00:20:53.677 "name": "Nvme$subsystem", 00:20:53.677 "trtype": "$TEST_TRANSPORT", 00:20:53.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.677 "adrfam": "ipv4", 00:20:53.677 "trsvcid": "$NVMF_PORT", 00:20:53.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.677 "hdgst": ${hdgst:-false}, 00:20:53.677 "ddgst": ${ddgst:-false} 00:20:53.677 }, 00:20:53.677 "method": "bdev_nvme_attach_controller" 00:20:53.677 } 00:20:53.677 EOF 00:20:53.677 )") 00:20:53.677 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:53.677 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.677 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.677 { 00:20:53.677 "params": { 00:20:53.677 "name": "Nvme$subsystem", 00:20:53.677 "trtype": "$TEST_TRANSPORT", 00:20:53.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.677 "adrfam": "ipv4", 00:20:53.677 "trsvcid": "$NVMF_PORT", 00:20:53.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.677 "hdgst": 
${hdgst:-false}, 00:20:53.677 "ddgst": ${ddgst:-false} 00:20:53.677 }, 00:20:53.677 "method": "bdev_nvme_attach_controller" 00:20:53.677 } 00:20:53.677 EOF 00:20:53.677 )") 00:20:53.677 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:53.677 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.677 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.677 { 00:20:53.677 "params": { 00:20:53.678 "name": "Nvme$subsystem", 00:20:53.678 "trtype": "$TEST_TRANSPORT", 00:20:53.678 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.678 "adrfam": "ipv4", 00:20:53.678 "trsvcid": "$NVMF_PORT", 00:20:53.678 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.678 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.678 "hdgst": ${hdgst:-false}, 00:20:53.678 "ddgst": ${ddgst:-false} 00:20:53.678 }, 00:20:53.678 "method": "bdev_nvme_attach_controller" 00:20:53.678 } 00:20:53.678 EOF 00:20:53.678 )") 00:20:53.678 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:53.678 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.678 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.678 { 00:20:53.678 "params": { 00:20:53.678 "name": "Nvme$subsystem", 00:20:53.678 "trtype": "$TEST_TRANSPORT", 00:20:53.678 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.678 "adrfam": "ipv4", 00:20:53.678 "trsvcid": "$NVMF_PORT", 00:20:53.678 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.678 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.678 "hdgst": ${hdgst:-false}, 00:20:53.678 "ddgst": ${ddgst:-false} 00:20:53.678 }, 00:20:53.678 "method": "bdev_nvme_attach_controller" 
00:20:53.678 } 00:20:53.678 EOF 00:20:53.678 )") 00:20:53.678 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:53.678 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.678 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.678 { 00:20:53.678 "params": { 00:20:53.678 "name": "Nvme$subsystem", 00:20:53.678 "trtype": "$TEST_TRANSPORT", 00:20:53.678 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.678 "adrfam": "ipv4", 00:20:53.678 "trsvcid": "$NVMF_PORT", 00:20:53.678 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.678 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.678 "hdgst": ${hdgst:-false}, 00:20:53.678 "ddgst": ${ddgst:-false} 00:20:53.678 }, 00:20:53.678 "method": "bdev_nvme_attach_controller" 00:20:53.678 } 00:20:53.678 EOF 00:20:53.678 )") 00:20:53.678 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:53.678 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.678 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.678 { 00:20:53.678 "params": { 00:20:53.678 "name": "Nvme$subsystem", 00:20:53.678 "trtype": "$TEST_TRANSPORT", 00:20:53.678 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.678 "adrfam": "ipv4", 00:20:53.678 "trsvcid": "$NVMF_PORT", 00:20:53.678 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.678 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.678 "hdgst": ${hdgst:-false}, 00:20:53.678 "ddgst": ${ddgst:-false} 00:20:53.678 }, 00:20:53.678 "method": "bdev_nvme_attach_controller" 00:20:53.678 } 00:20:53.678 EOF 00:20:53.678 )") 00:20:53.678 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:20:53.678 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.678 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.678 { 00:20:53.678 "params": { 00:20:53.678 "name": "Nvme$subsystem", 00:20:53.678 "trtype": "$TEST_TRANSPORT", 00:20:53.678 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.678 "adrfam": "ipv4", 00:20:53.678 "trsvcid": "$NVMF_PORT", 00:20:53.678 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.678 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.678 "hdgst": ${hdgst:-false}, 00:20:53.678 "ddgst": ${ddgst:-false} 00:20:53.678 }, 00:20:53.678 "method": "bdev_nvme_attach_controller" 00:20:53.678 } 00:20:53.678 EOF 00:20:53.678 )") 00:20:53.678 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:53.678 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.678 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.678 { 00:20:53.678 "params": { 00:20:53.678 "name": "Nvme$subsystem", 00:20:53.678 "trtype": "$TEST_TRANSPORT", 00:20:53.678 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.678 "adrfam": "ipv4", 00:20:53.678 "trsvcid": "$NVMF_PORT", 00:20:53.678 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.678 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.678 "hdgst": ${hdgst:-false}, 00:20:53.678 "ddgst": ${ddgst:-false} 00:20:53.678 }, 00:20:53.678 "method": "bdev_nvme_attach_controller" 00:20:53.678 } 00:20:53.678 EOF 00:20:53.678 )") 00:20:53.678 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:53.678 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@584 -- # jq . 00:20:53.678 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:53.678 09:11:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:53.678 "params": { 00:20:53.678 "name": "Nvme1", 00:20:53.678 "trtype": "tcp", 00:20:53.678 "traddr": "10.0.0.2", 00:20:53.678 "adrfam": "ipv4", 00:20:53.678 "trsvcid": "4420", 00:20:53.678 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.678 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:53.678 "hdgst": false, 00:20:53.678 "ddgst": false 00:20:53.678 }, 00:20:53.678 "method": "bdev_nvme_attach_controller" 00:20:53.678 },{ 00:20:53.678 "params": { 00:20:53.678 "name": "Nvme2", 00:20:53.678 "trtype": "tcp", 00:20:53.678 "traddr": "10.0.0.2", 00:20:53.678 "adrfam": "ipv4", 00:20:53.678 "trsvcid": "4420", 00:20:53.678 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:53.678 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:53.678 "hdgst": false, 00:20:53.678 "ddgst": false 00:20:53.678 }, 00:20:53.678 "method": "bdev_nvme_attach_controller" 00:20:53.678 },{ 00:20:53.678 "params": { 00:20:53.678 "name": "Nvme3", 00:20:53.678 "trtype": "tcp", 00:20:53.678 "traddr": "10.0.0.2", 00:20:53.678 "adrfam": "ipv4", 00:20:53.678 "trsvcid": "4420", 00:20:53.678 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:53.678 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:53.678 "hdgst": false, 00:20:53.678 "ddgst": false 00:20:53.678 }, 00:20:53.678 "method": "bdev_nvme_attach_controller" 00:20:53.678 },{ 00:20:53.679 "params": { 00:20:53.679 "name": "Nvme4", 00:20:53.679 "trtype": "tcp", 00:20:53.679 "traddr": "10.0.0.2", 00:20:53.679 "adrfam": "ipv4", 00:20:53.679 "trsvcid": "4420", 00:20:53.679 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:53.679 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:53.679 "hdgst": false, 00:20:53.679 "ddgst": false 00:20:53.679 }, 00:20:53.679 "method": "bdev_nvme_attach_controller" 00:20:53.679 },{ 
00:20:53.679 "params": { 00:20:53.679 "name": "Nvme5", 00:20:53.679 "trtype": "tcp", 00:20:53.679 "traddr": "10.0.0.2", 00:20:53.679 "adrfam": "ipv4", 00:20:53.679 "trsvcid": "4420", 00:20:53.679 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:53.679 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:53.679 "hdgst": false, 00:20:53.679 "ddgst": false 00:20:53.679 }, 00:20:53.679 "method": "bdev_nvme_attach_controller" 00:20:53.679 },{ 00:20:53.679 "params": { 00:20:53.679 "name": "Nvme6", 00:20:53.679 "trtype": "tcp", 00:20:53.679 "traddr": "10.0.0.2", 00:20:53.679 "adrfam": "ipv4", 00:20:53.679 "trsvcid": "4420", 00:20:53.679 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:53.679 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:53.679 "hdgst": false, 00:20:53.679 "ddgst": false 00:20:53.679 }, 00:20:53.679 "method": "bdev_nvme_attach_controller" 00:20:53.679 },{ 00:20:53.679 "params": { 00:20:53.679 "name": "Nvme7", 00:20:53.679 "trtype": "tcp", 00:20:53.679 "traddr": "10.0.0.2", 00:20:53.679 "adrfam": "ipv4", 00:20:53.679 "trsvcid": "4420", 00:20:53.679 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:53.679 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:53.679 "hdgst": false, 00:20:53.679 "ddgst": false 00:20:53.679 }, 00:20:53.679 "method": "bdev_nvme_attach_controller" 00:20:53.679 },{ 00:20:53.679 "params": { 00:20:53.679 "name": "Nvme8", 00:20:53.679 "trtype": "tcp", 00:20:53.679 "traddr": "10.0.0.2", 00:20:53.679 "adrfam": "ipv4", 00:20:53.679 "trsvcid": "4420", 00:20:53.679 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:53.679 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:53.679 "hdgst": false, 00:20:53.679 "ddgst": false 00:20:53.679 }, 00:20:53.679 "method": "bdev_nvme_attach_controller" 00:20:53.679 },{ 00:20:53.679 "params": { 00:20:53.679 "name": "Nvme9", 00:20:53.679 "trtype": "tcp", 00:20:53.679 "traddr": "10.0.0.2", 00:20:53.679 "adrfam": "ipv4", 00:20:53.679 "trsvcid": "4420", 00:20:53.679 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:53.679 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:20:53.679 "hdgst": false, 00:20:53.679 "ddgst": false 00:20:53.679 }, 00:20:53.679 "method": "bdev_nvme_attach_controller" 00:20:53.679 },{ 00:20:53.679 "params": { 00:20:53.679 "name": "Nvme10", 00:20:53.679 "trtype": "tcp", 00:20:53.679 "traddr": "10.0.0.2", 00:20:53.679 "adrfam": "ipv4", 00:20:53.679 "trsvcid": "4420", 00:20:53.679 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:53.679 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:53.679 "hdgst": false, 00:20:53.679 "ddgst": false 00:20:53.679 }, 00:20:53.679 "method": "bdev_nvme_attach_controller" 00:20:53.679 }' 00:20:53.679 [2024-11-20 09:11:30.553369] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:20:53.679 [2024-11-20 09:11:30.553456] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:53.679 [2024-11-20 09:11:30.626427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.679 [2024-11-20 09:11:30.687604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.576 09:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:55.576 09:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:20:55.576 09:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:55.576 09:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.576 09:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:55.576 09:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.576 09:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3373594 00:20:55.576 09:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:20:55.576 09:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:20:56.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3373594 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:56.946 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3373414 00:20:56.946 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:56.946 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:56.946 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:56.946 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:56.946 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:56.946 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:56.946 { 00:20:56.946 "params": { 00:20:56.946 "name": "Nvme$subsystem", 00:20:56.946 "trtype": "$TEST_TRANSPORT", 00:20:56.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.946 "adrfam": "ipv4", 00:20:56.946 "trsvcid": "$NVMF_PORT", 00:20:56.946 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.946 "hdgst": ${hdgst:-false}, 00:20:56.946 "ddgst": ${ddgst:-false} 00:20:56.946 }, 00:20:56.946 "method": "bdev_nvme_attach_controller" 00:20:56.946 } 00:20:56.946 EOF 00:20:56.946 )") 00:20:56.946 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:56.946 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:56.946 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:56.946 { 00:20:56.946 "params": { 00:20:56.946 "name": "Nvme$subsystem", 00:20:56.946 "trtype": "$TEST_TRANSPORT", 00:20:56.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.946 "adrfam": "ipv4", 00:20:56.946 "trsvcid": "$NVMF_PORT", 00:20:56.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.946 "hdgst": ${hdgst:-false}, 00:20:56.946 "ddgst": ${ddgst:-false} 00:20:56.946 }, 00:20:56.946 "method": "bdev_nvme_attach_controller" 00:20:56.946 } 00:20:56.946 EOF 00:20:56.946 )") 00:20:56.946 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:56.946 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:56.946 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:56.946 { 00:20:56.946 "params": { 00:20:56.946 "name": "Nvme$subsystem", 00:20:56.946 "trtype": "$TEST_TRANSPORT", 00:20:56.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.946 "adrfam": "ipv4", 00:20:56.946 "trsvcid": "$NVMF_PORT", 00:20:56.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.946 "hdgst": 
${hdgst:-false}, 00:20:56.946 "ddgst": ${ddgst:-false} 00:20:56.946 }, 00:20:56.946 "method": "bdev_nvme_attach_controller" 00:20:56.946 } 00:20:56.946 EOF 00:20:56.946 )") 00:20:56.946 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:56.946 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:56.946 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:56.946 { 00:20:56.946 "params": { 00:20:56.946 "name": "Nvme$subsystem", 00:20:56.946 "trtype": "$TEST_TRANSPORT", 00:20:56.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.946 "adrfam": "ipv4", 00:20:56.946 "trsvcid": "$NVMF_PORT", 00:20:56.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.946 "hdgst": ${hdgst:-false}, 00:20:56.946 "ddgst": ${ddgst:-false} 00:20:56.946 }, 00:20:56.946 "method": "bdev_nvme_attach_controller" 00:20:56.946 } 00:20:56.946 EOF 00:20:56.946 )") 00:20:56.946 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:56.946 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:56.946 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:56.946 { 00:20:56.946 "params": { 00:20:56.946 "name": "Nvme$subsystem", 00:20:56.946 "trtype": "$TEST_TRANSPORT", 00:20:56.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.946 "adrfam": "ipv4", 00:20:56.946 "trsvcid": "$NVMF_PORT", 00:20:56.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.946 "hdgst": ${hdgst:-false}, 00:20:56.946 "ddgst": ${ddgst:-false} 00:20:56.946 }, 00:20:56.946 "method": "bdev_nvme_attach_controller" 
00:20:56.946 } 00:20:56.946 EOF 00:20:56.946 )") 00:20:56.946 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:56.946 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:56.946 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:56.946 { 00:20:56.946 "params": { 00:20:56.946 "name": "Nvme$subsystem", 00:20:56.946 "trtype": "$TEST_TRANSPORT", 00:20:56.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.946 "adrfam": "ipv4", 00:20:56.946 "trsvcid": "$NVMF_PORT", 00:20:56.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.946 "hdgst": ${hdgst:-false}, 00:20:56.946 "ddgst": ${ddgst:-false} 00:20:56.946 }, 00:20:56.946 "method": "bdev_nvme_attach_controller" 00:20:56.946 } 00:20:56.946 EOF 00:20:56.946 )") 00:20:56.946 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:56.946 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:56.946 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:56.946 { 00:20:56.946 "params": { 00:20:56.946 "name": "Nvme$subsystem", 00:20:56.946 "trtype": "$TEST_TRANSPORT", 00:20:56.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.946 "adrfam": "ipv4", 00:20:56.946 "trsvcid": "$NVMF_PORT", 00:20:56.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.946 "hdgst": ${hdgst:-false}, 00:20:56.946 "ddgst": ${ddgst:-false} 00:20:56.946 }, 00:20:56.946 "method": "bdev_nvme_attach_controller" 00:20:56.946 } 00:20:56.946 EOF 00:20:56.946 )") 00:20:56.946 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:20:56.946 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:56.946 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:56.946 { 00:20:56.946 "params": { 00:20:56.946 "name": "Nvme$subsystem", 00:20:56.946 "trtype": "$TEST_TRANSPORT", 00:20:56.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.946 "adrfam": "ipv4", 00:20:56.946 "trsvcid": "$NVMF_PORT", 00:20:56.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.946 "hdgst": ${hdgst:-false}, 00:20:56.946 "ddgst": ${ddgst:-false} 00:20:56.946 }, 00:20:56.946 "method": "bdev_nvme_attach_controller" 00:20:56.946 } 00:20:56.946 EOF 00:20:56.946 )") 00:20:56.946 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:56.946 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:56.946 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:56.946 { 00:20:56.946 "params": { 00:20:56.946 "name": "Nvme$subsystem", 00:20:56.946 "trtype": "$TEST_TRANSPORT", 00:20:56.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.946 "adrfam": "ipv4", 00:20:56.946 "trsvcid": "$NVMF_PORT", 00:20:56.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.946 "hdgst": ${hdgst:-false}, 00:20:56.946 "ddgst": ${ddgst:-false} 00:20:56.946 }, 00:20:56.946 "method": "bdev_nvme_attach_controller" 00:20:56.946 } 00:20:56.946 EOF 00:20:56.946 )") 00:20:56.946 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:56.946 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:56.946 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:56.946 { 00:20:56.946 "params": { 00:20:56.946 "name": "Nvme$subsystem", 00:20:56.946 "trtype": "$TEST_TRANSPORT", 00:20:56.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.946 "adrfam": "ipv4", 00:20:56.946 "trsvcid": "$NVMF_PORT", 00:20:56.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.947 "hdgst": ${hdgst:-false}, 00:20:56.947 "ddgst": ${ddgst:-false} 00:20:56.947 }, 00:20:56.947 "method": "bdev_nvme_attach_controller" 00:20:56.947 } 00:20:56.947 EOF 00:20:56.947 )") 00:20:56.947 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:56.947 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:20:56.947 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:56.947 09:11:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:56.947 "params": { 00:20:56.947 "name": "Nvme1", 00:20:56.947 "trtype": "tcp", 00:20:56.947 "traddr": "10.0.0.2", 00:20:56.947 "adrfam": "ipv4", 00:20:56.947 "trsvcid": "4420", 00:20:56.947 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.947 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:56.947 "hdgst": false, 00:20:56.947 "ddgst": false 00:20:56.947 }, 00:20:56.947 "method": "bdev_nvme_attach_controller" 00:20:56.947 },{ 00:20:56.947 "params": { 00:20:56.947 "name": "Nvme2", 00:20:56.947 "trtype": "tcp", 00:20:56.947 "traddr": "10.0.0.2", 00:20:56.947 "adrfam": "ipv4", 00:20:56.947 "trsvcid": "4420", 00:20:56.947 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:56.947 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:56.947 "hdgst": false, 00:20:56.947 "ddgst": false 00:20:56.947 }, 
00:20:56.947 "method": "bdev_nvme_attach_controller" 00:20:56.947 },{ 00:20:56.947 "params": { 00:20:56.947 "name": "Nvme3", 00:20:56.947 "trtype": "tcp", 00:20:56.947 "traddr": "10.0.0.2", 00:20:56.947 "adrfam": "ipv4", 00:20:56.947 "trsvcid": "4420", 00:20:56.947 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:56.947 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:56.947 "hdgst": false, 00:20:56.947 "ddgst": false 00:20:56.947 }, 00:20:56.947 "method": "bdev_nvme_attach_controller" 00:20:56.947 },{ 00:20:56.947 "params": { 00:20:56.947 "name": "Nvme4", 00:20:56.947 "trtype": "tcp", 00:20:56.947 "traddr": "10.0.0.2", 00:20:56.947 "adrfam": "ipv4", 00:20:56.947 "trsvcid": "4420", 00:20:56.947 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:56.947 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:56.947 "hdgst": false, 00:20:56.947 "ddgst": false 00:20:56.947 }, 00:20:56.947 "method": "bdev_nvme_attach_controller" 00:20:56.947 },{ 00:20:56.947 "params": { 00:20:56.947 "name": "Nvme5", 00:20:56.947 "trtype": "tcp", 00:20:56.947 "traddr": "10.0.0.2", 00:20:56.947 "adrfam": "ipv4", 00:20:56.947 "trsvcid": "4420", 00:20:56.947 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:56.947 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:56.947 "hdgst": false, 00:20:56.947 "ddgst": false 00:20:56.947 }, 00:20:56.947 "method": "bdev_nvme_attach_controller" 00:20:56.947 },{ 00:20:56.947 "params": { 00:20:56.947 "name": "Nvme6", 00:20:56.947 "trtype": "tcp", 00:20:56.947 "traddr": "10.0.0.2", 00:20:56.947 "adrfam": "ipv4", 00:20:56.947 "trsvcid": "4420", 00:20:56.947 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:56.947 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:56.947 "hdgst": false, 00:20:56.947 "ddgst": false 00:20:56.947 }, 00:20:56.947 "method": "bdev_nvme_attach_controller" 00:20:56.947 },{ 00:20:56.947 "params": { 00:20:56.947 "name": "Nvme7", 00:20:56.947 "trtype": "tcp", 00:20:56.947 "traddr": "10.0.0.2", 00:20:56.947 "adrfam": "ipv4", 00:20:56.947 "trsvcid": "4420", 00:20:56.947 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:56.947 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:56.947 "hdgst": false, 00:20:56.947 "ddgst": false 00:20:56.947 }, 00:20:56.947 "method": "bdev_nvme_attach_controller" 00:20:56.947 },{ 00:20:56.947 "params": { 00:20:56.947 "name": "Nvme8", 00:20:56.947 "trtype": "tcp", 00:20:56.947 "traddr": "10.0.0.2", 00:20:56.947 "adrfam": "ipv4", 00:20:56.947 "trsvcid": "4420", 00:20:56.947 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:56.947 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:56.947 "hdgst": false, 00:20:56.947 "ddgst": false 00:20:56.947 }, 00:20:56.947 "method": "bdev_nvme_attach_controller" 00:20:56.947 },{ 00:20:56.947 "params": { 00:20:56.947 "name": "Nvme9", 00:20:56.947 "trtype": "tcp", 00:20:56.947 "traddr": "10.0.0.2", 00:20:56.947 "adrfam": "ipv4", 00:20:56.947 "trsvcid": "4420", 00:20:56.947 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:56.947 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:56.947 "hdgst": false, 00:20:56.947 "ddgst": false 00:20:56.947 }, 00:20:56.947 "method": "bdev_nvme_attach_controller" 00:20:56.947 },{ 00:20:56.947 "params": { 00:20:56.947 "name": "Nvme10", 00:20:56.947 "trtype": "tcp", 00:20:56.947 "traddr": "10.0.0.2", 00:20:56.947 "adrfam": "ipv4", 00:20:56.947 "trsvcid": "4420", 00:20:56.947 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:56.947 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:56.947 "hdgst": false, 00:20:56.947 "ddgst": false 00:20:56.947 }, 00:20:56.947 "method": "bdev_nvme_attach_controller" 00:20:56.947 }' 00:20:56.947 [2024-11-20 09:11:33.611869] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:20:56.947 [2024-11-20 09:11:33.611949] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3374016 ] 00:20:56.947 [2024-11-20 09:11:33.684614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.947 [2024-11-20 09:11:33.745010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.318 Running I/O for 1 seconds... 00:20:59.344 1808.00 IOPS, 113.00 MiB/s 00:20:59.344 Latency(us) 00:20:59.344 [2024-11-20T08:11:36.373Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.344 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.344 Verification LBA range: start 0x0 length 0x400 00:20:59.344 Nvme1n1 : 1.15 222.47 13.90 0.00 0.00 283005.72 22622.06 260978.92 00:20:59.344 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.344 Verification LBA range: start 0x0 length 0x400 00:20:59.344 Nvme2n1 : 1.10 236.61 14.79 0.00 0.00 261329.75 7184.69 260978.92 00:20:59.344 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.344 Verification LBA range: start 0x0 length 0x400 00:20:59.344 Nvme3n1 : 1.09 237.63 14.85 0.00 0.00 254787.72 9369.22 251658.24 00:20:59.344 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.344 Verification LBA range: start 0x0 length 0x400 00:20:59.344 Nvme4n1 : 1.11 231.19 14.45 0.00 0.00 260288.66 17670.45 260978.92 00:20:59.344 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.344 Verification LBA range: start 0x0 length 0x400 00:20:59.344 Nvme5n1 : 1.12 228.27 14.27 0.00 0.00 259200.19 20777.34 256318.58 00:20:59.344 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.344 Verification LBA range: start 0x0 
length 0x400 00:20:59.344 Nvme6n1 : 1.12 229.25 14.33 0.00 0.00 253540.50 22719.15 254765.13 00:20:59.344 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.344 Verification LBA range: start 0x0 length 0x400 00:20:59.344 Nvme7n1 : 1.15 227.05 14.19 0.00 0.00 251246.91 3859.34 262532.36 00:20:59.344 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.344 Verification LBA range: start 0x0 length 0x400 00:20:59.344 Nvme8n1 : 1.19 266.11 16.63 0.00 0.00 210768.34 14272.28 256318.58 00:20:59.344 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.344 Verification LBA range: start 0x0 length 0x400 00:20:59.344 Nvme9n1 : 1.18 216.53 13.53 0.00 0.00 256585.58 21651.15 281173.71 00:20:59.344 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.344 Verification LBA range: start 0x0 length 0x400 00:20:59.344 Nvme10n1 : 1.20 267.75 16.73 0.00 0.00 204166.11 6213.78 274959.93 00:20:59.344 [2024-11-20T08:11:36.373Z] =================================================================================================================== 00:20:59.344 [2024-11-20T08:11:36.373Z] Total : 2362.86 147.68 0.00 0.00 247568.54 3859.34 281173.71 00:20:59.603 09:11:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:20:59.603 09:11:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:59.603 09:11:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:59.603 09:11:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:59.603 09:11:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- target/shutdown.sh@46 -- # nvmftestfini 00:20:59.603 09:11:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:59.603 09:11:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:20:59.603 09:11:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:59.603 09:11:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:20:59.603 09:11:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:59.603 09:11:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:59.603 rmmod nvme_tcp 00:20:59.603 rmmod nvme_fabrics 00:20:59.603 rmmod nvme_keyring 00:20:59.603 09:11:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:59.603 09:11:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:20:59.603 09:11:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:20:59.603 09:11:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3373414 ']' 00:20:59.603 09:11:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3373414 00:20:59.603 09:11:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 3373414 ']' 00:20:59.603 09:11:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # kill -0 3373414 00:20:59.603 09:11:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:20:59.603 09:11:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:59.603 09:11:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3373414 00:20:59.603 09:11:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:59.603 09:11:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:59.603 09:11:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3373414' 00:20:59.603 killing process with pid 3373414 00:20:59.603 09:11:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 3373414 00:20:59.603 09:11:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 3373414 00:21:00.170 09:11:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:00.170 09:11:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:00.170 09:11:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:00.170 09:11:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:21:00.170 09:11:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:21:00.170 09:11:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:00.170 09:11:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:21:00.170 09:11:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:21:00.170 09:11:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:00.170 09:11:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.170 09:11:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:00.170 09:11:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.703 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:02.703 00:21:02.703 real 0m11.931s 00:21:02.703 user 0m33.647s 00:21:02.703 sys 0m3.453s 00:21:02.703 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:02.703 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:02.703 ************************************ 00:21:02.703 END TEST nvmf_shutdown_tc1 00:21:02.703 ************************************ 00:21:02.703 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:02.703 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:02.703 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:02.703 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:02.703 ************************************ 00:21:02.703 START TEST nvmf_shutdown_tc2 00:21:02.703 ************************************ 00:21:02.703 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2 00:21:02.703 09:11:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:21:02.703 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:02.703 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:02.703 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:02.703 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:02.703 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:02.703 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:02.703 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.703 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:02.704 09:11:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:02.704 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:02.704 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:02.704 Found net devices under 0000:09:00.0: cvl_0_0 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.704 09:11:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:02.704 Found net devices under 0000:09:00.1: cvl_0_1 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:02.704 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:02.705 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:02.705 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:21:02.705 00:21:02.705 --- 10.0.0.2 ping statistics --- 00:21:02.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.705 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:02.705 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:02.705 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:21:02.705 00:21:02.705 --- 10.0.0.1 ping statistics --- 00:21:02.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.705 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:02.705 
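The trace above shows `nvmf/common.sh` moving the target interface into a private network namespace, assigning the 10.0.0.1/10.0.0.2 pair, opening TCP port 4420 in iptables, and verifying reachability with ping. A dry-run sketch of that plumbing is below; the interface names, addresses, and namespace name mirror the log, but `run` only echoes each command so the sketch works without root (replace its body with `"$@"` to apply for real).

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup performed by nvmf/common.sh above.
set -euo pipefail

TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

run() { echo "+ $*"; }   # echo-only; swap for "$@" (as root) to execute

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
```

Note that the target app is later launched under `ip netns exec "$NS"` (via `NVMF_TARGET_NS_CMD`), which is why the listener binds 10.0.0.2 inside the namespace while the initiator stays in the root namespace.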
09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3374780 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3374780 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3374780 ']' 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:02.705 [2024-11-20 09:11:39.398191] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:21:02.705 [2024-11-20 09:11:39.398272] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.705 [2024-11-20 09:11:39.469744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:02.705 [2024-11-20 09:11:39.525578] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.705 [2024-11-20 09:11:39.525647] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.705 [2024-11-20 09:11:39.525661] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.705 [2024-11-20 09:11:39.525678] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.705 [2024-11-20 09:11:39.525688] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:02.705 [2024-11-20 09:11:39.527157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.705 [2024-11-20 09:11:39.527217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:02.705 [2024-11-20 09:11:39.527282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:02.705 [2024-11-20 09:11:39.527285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:02.705 [2024-11-20 09:11:39.715894] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.705 09:11:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:02.705 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:02.974 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:02.974 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:02.974 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:02.974 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:02.974 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:02.974 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:02.974 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:02.974 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:02.974 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:02.974 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:21:02.974 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:02.974 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:02.974 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:02.974 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:02.974 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:02.974 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:02.974 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:02.974 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:02.974 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:02.974 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:02.974 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:02.974 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.974 09:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:02.974 Malloc1 00:21:02.974 [2024-11-20 09:11:39.822870] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:02.974 Malloc2 00:21:02.974 Malloc3 00:21:02.974 Malloc4 00:21:02.974 Malloc5 00:21:03.233 Malloc6 00:21:03.233 Malloc7 00:21:03.233 Malloc8 00:21:03.233 Malloc9 
00:21:03.233 Malloc10 00:21:03.492 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.492 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:03.492 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:03.492 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:03.492 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3374958 00:21:03.492 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3374958 /var/tmp/bdevperf.sock 00:21:03.492 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3374958 ']' 00:21:03.492 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:03.492 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:03.492 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:03.492 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:03.492 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:21:03.492 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:21:03.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:03.492 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:21:03.492 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:03.492 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:03.492 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:03.492 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:03.492 { 00:21:03.492 "params": { 00:21:03.492 "name": "Nvme$subsystem", 00:21:03.492 "trtype": "$TEST_TRANSPORT", 00:21:03.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.492 "adrfam": "ipv4", 00:21:03.492 "trsvcid": "$NVMF_PORT", 00:21:03.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.492 "hdgst": ${hdgst:-false}, 00:21:03.492 "ddgst": ${ddgst:-false} 00:21:03.492 }, 00:21:03.492 "method": "bdev_nvme_attach_controller" 00:21:03.492 } 00:21:03.492 EOF 00:21:03.492 )") 00:21:03.492 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:03.492 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:03.492 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:03.492 { 00:21:03.492 "params": { 00:21:03.492 "name": "Nvme$subsystem", 00:21:03.492 "trtype": "$TEST_TRANSPORT", 00:21:03.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.492 "adrfam": "ipv4", 00:21:03.492 "trsvcid": "$NVMF_PORT", 00:21:03.492 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.492 "hdgst": ${hdgst:-false}, 00:21:03.492 "ddgst": ${ddgst:-false} 00:21:03.492 }, 00:21:03.492 "method": "bdev_nvme_attach_controller" 00:21:03.492 } 00:21:03.492 EOF 00:21:03.492 )") 00:21:03.492 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:03.492 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:03.492 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:03.492 { 00:21:03.492 "params": { 00:21:03.492 "name": "Nvme$subsystem", 00:21:03.492 "trtype": "$TEST_TRANSPORT", 00:21:03.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.492 "adrfam": "ipv4", 00:21:03.492 "trsvcid": "$NVMF_PORT", 00:21:03.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.492 "hdgst": ${hdgst:-false}, 00:21:03.492 "ddgst": ${ddgst:-false} 00:21:03.492 }, 00:21:03.492 "method": "bdev_nvme_attach_controller" 00:21:03.492 } 00:21:03.492 EOF 00:21:03.492 )") 00:21:03.492 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:03.492 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:03.492 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:03.492 { 00:21:03.492 "params": { 00:21:03.492 "name": "Nvme$subsystem", 00:21:03.492 "trtype": "$TEST_TRANSPORT", 00:21:03.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.492 "adrfam": "ipv4", 00:21:03.492 "trsvcid": "$NVMF_PORT", 00:21:03.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.492 "hdgst": 
${hdgst:-false}, 00:21:03.492 "ddgst": ${ddgst:-false} 00:21:03.492 }, 00:21:03.492 "method": "bdev_nvme_attach_controller" 00:21:03.492 } 00:21:03.492 EOF 00:21:03.492 )") 00:21:03.492 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:03.492 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:03.492 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:03.492 { 00:21:03.492 "params": { 00:21:03.492 "name": "Nvme$subsystem", 00:21:03.492 "trtype": "$TEST_TRANSPORT", 00:21:03.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.492 "adrfam": "ipv4", 00:21:03.492 "trsvcid": "$NVMF_PORT", 00:21:03.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.492 "hdgst": ${hdgst:-false}, 00:21:03.492 "ddgst": ${ddgst:-false} 00:21:03.492 }, 00:21:03.492 "method": "bdev_nvme_attach_controller" 00:21:03.492 } 00:21:03.492 EOF 00:21:03.492 )") 00:21:03.492 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:03.492 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:03.492 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:03.492 { 00:21:03.492 "params": { 00:21:03.493 "name": "Nvme$subsystem", 00:21:03.493 "trtype": "$TEST_TRANSPORT", 00:21:03.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.493 "adrfam": "ipv4", 00:21:03.493 "trsvcid": "$NVMF_PORT", 00:21:03.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.493 "hdgst": ${hdgst:-false}, 00:21:03.493 "ddgst": ${ddgst:-false} 00:21:03.493 }, 00:21:03.493 "method": "bdev_nvme_attach_controller" 
00:21:03.493 } 00:21:03.493 EOF 00:21:03.493 )") 00:21:03.493 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:03.493 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:03.493 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:03.493 { 00:21:03.493 "params": { 00:21:03.493 "name": "Nvme$subsystem", 00:21:03.493 "trtype": "$TEST_TRANSPORT", 00:21:03.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.493 "adrfam": "ipv4", 00:21:03.493 "trsvcid": "$NVMF_PORT", 00:21:03.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.493 "hdgst": ${hdgst:-false}, 00:21:03.493 "ddgst": ${ddgst:-false} 00:21:03.493 }, 00:21:03.493 "method": "bdev_nvme_attach_controller" 00:21:03.493 } 00:21:03.493 EOF 00:21:03.493 )") 00:21:03.493 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:03.493 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:03.493 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:03.493 { 00:21:03.493 "params": { 00:21:03.493 "name": "Nvme$subsystem", 00:21:03.493 "trtype": "$TEST_TRANSPORT", 00:21:03.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.493 "adrfam": "ipv4", 00:21:03.493 "trsvcid": "$NVMF_PORT", 00:21:03.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.493 "hdgst": ${hdgst:-false}, 00:21:03.493 "ddgst": ${ddgst:-false} 00:21:03.493 }, 00:21:03.493 "method": "bdev_nvme_attach_controller" 00:21:03.493 } 00:21:03.493 EOF 00:21:03.493 )") 00:21:03.493 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@582 -- # cat 00:21:03.493 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:03.493 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:03.493 { 00:21:03.493 "params": { 00:21:03.493 "name": "Nvme$subsystem", 00:21:03.493 "trtype": "$TEST_TRANSPORT", 00:21:03.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.493 "adrfam": "ipv4", 00:21:03.493 "trsvcid": "$NVMF_PORT", 00:21:03.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.493 "hdgst": ${hdgst:-false}, 00:21:03.493 "ddgst": ${ddgst:-false} 00:21:03.493 }, 00:21:03.493 "method": "bdev_nvme_attach_controller" 00:21:03.493 } 00:21:03.493 EOF 00:21:03.493 )") 00:21:03.493 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:03.493 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:03.493 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:03.493 { 00:21:03.493 "params": { 00:21:03.493 "name": "Nvme$subsystem", 00:21:03.493 "trtype": "$TEST_TRANSPORT", 00:21:03.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.493 "adrfam": "ipv4", 00:21:03.493 "trsvcid": "$NVMF_PORT", 00:21:03.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.493 "hdgst": ${hdgst:-false}, 00:21:03.493 "ddgst": ${ddgst:-false} 00:21:03.493 }, 00:21:03.493 "method": "bdev_nvme_attach_controller" 00:21:03.493 } 00:21:03.493 EOF 00:21:03.493 )") 00:21:03.493 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:03.493 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@584 -- # jq . 00:21:03.493 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:21:03.493 09:11:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:03.493 "params": { 00:21:03.493 "name": "Nvme1", 00:21:03.493 "trtype": "tcp", 00:21:03.493 "traddr": "10.0.0.2", 00:21:03.493 "adrfam": "ipv4", 00:21:03.493 "trsvcid": "4420", 00:21:03.493 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.493 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:03.493 "hdgst": false, 00:21:03.493 "ddgst": false 00:21:03.493 }, 00:21:03.493 "method": "bdev_nvme_attach_controller" 00:21:03.493 },{ 00:21:03.493 "params": { 00:21:03.493 "name": "Nvme2", 00:21:03.493 "trtype": "tcp", 00:21:03.493 "traddr": "10.0.0.2", 00:21:03.493 "adrfam": "ipv4", 00:21:03.493 "trsvcid": "4420", 00:21:03.493 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:03.493 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:03.493 "hdgst": false, 00:21:03.493 "ddgst": false 00:21:03.493 }, 00:21:03.493 "method": "bdev_nvme_attach_controller" 00:21:03.493 },{ 00:21:03.493 "params": { 00:21:03.493 "name": "Nvme3", 00:21:03.493 "trtype": "tcp", 00:21:03.493 "traddr": "10.0.0.2", 00:21:03.493 "adrfam": "ipv4", 00:21:03.493 "trsvcid": "4420", 00:21:03.493 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:03.493 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:03.493 "hdgst": false, 00:21:03.493 "ddgst": false 00:21:03.493 }, 00:21:03.493 "method": "bdev_nvme_attach_controller" 00:21:03.493 },{ 00:21:03.493 "params": { 00:21:03.493 "name": "Nvme4", 00:21:03.493 "trtype": "tcp", 00:21:03.493 "traddr": "10.0.0.2", 00:21:03.493 "adrfam": "ipv4", 00:21:03.493 "trsvcid": "4420", 00:21:03.493 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:03.493 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:03.493 "hdgst": false, 00:21:03.493 "ddgst": false 00:21:03.493 }, 00:21:03.493 "method": "bdev_nvme_attach_controller" 00:21:03.493 },{ 
00:21:03.493 "params": { 00:21:03.493 "name": "Nvme5", 00:21:03.494 "trtype": "tcp", 00:21:03.494 "traddr": "10.0.0.2", 00:21:03.494 "adrfam": "ipv4", 00:21:03.494 "trsvcid": "4420", 00:21:03.494 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:03.494 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:03.494 "hdgst": false, 00:21:03.494 "ddgst": false 00:21:03.494 }, 00:21:03.494 "method": "bdev_nvme_attach_controller" 00:21:03.494 },{ 00:21:03.494 "params": { 00:21:03.494 "name": "Nvme6", 00:21:03.494 "trtype": "tcp", 00:21:03.494 "traddr": "10.0.0.2", 00:21:03.494 "adrfam": "ipv4", 00:21:03.494 "trsvcid": "4420", 00:21:03.494 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:03.494 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:03.494 "hdgst": false, 00:21:03.494 "ddgst": false 00:21:03.494 }, 00:21:03.494 "method": "bdev_nvme_attach_controller" 00:21:03.494 },{ 00:21:03.494 "params": { 00:21:03.494 "name": "Nvme7", 00:21:03.494 "trtype": "tcp", 00:21:03.494 "traddr": "10.0.0.2", 00:21:03.494 "adrfam": "ipv4", 00:21:03.494 "trsvcid": "4420", 00:21:03.494 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:03.494 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:03.494 "hdgst": false, 00:21:03.494 "ddgst": false 00:21:03.494 }, 00:21:03.494 "method": "bdev_nvme_attach_controller" 00:21:03.494 },{ 00:21:03.494 "params": { 00:21:03.494 "name": "Nvme8", 00:21:03.494 "trtype": "tcp", 00:21:03.494 "traddr": "10.0.0.2", 00:21:03.494 "adrfam": "ipv4", 00:21:03.494 "trsvcid": "4420", 00:21:03.494 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:03.494 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:03.494 "hdgst": false, 00:21:03.494 "ddgst": false 00:21:03.494 }, 00:21:03.494 "method": "bdev_nvme_attach_controller" 00:21:03.494 },{ 00:21:03.494 "params": { 00:21:03.494 "name": "Nvme9", 00:21:03.494 "trtype": "tcp", 00:21:03.494 "traddr": "10.0.0.2", 00:21:03.494 "adrfam": "ipv4", 00:21:03.494 "trsvcid": "4420", 00:21:03.494 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:03.494 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:21:03.494 "hdgst": false, 00:21:03.494 "ddgst": false 00:21:03.494 }, 00:21:03.494 "method": "bdev_nvme_attach_controller" 00:21:03.494 },{ 00:21:03.494 "params": { 00:21:03.494 "name": "Nvme10", 00:21:03.494 "trtype": "tcp", 00:21:03.494 "traddr": "10.0.0.2", 00:21:03.494 "adrfam": "ipv4", 00:21:03.494 "trsvcid": "4420", 00:21:03.494 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:03.494 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:03.494 "hdgst": false, 00:21:03.494 "ddgst": false 00:21:03.494 }, 00:21:03.494 "method": "bdev_nvme_attach_controller" 00:21:03.494 }' 00:21:03.494 [2024-11-20 09:11:40.337164] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:21:03.494 [2024-11-20 09:11:40.337251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3374958 ] 00:21:03.494 [2024-11-20 09:11:40.411415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.494 [2024-11-20 09:11:40.472958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.395 Running I/O for 10 seconds... 
00:21:05.653 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:05.653 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:21:05.653 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:05.653 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.653 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:05.653 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.653 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:05.653 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:05.653 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:05.653 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:21:05.653 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:21:05.653 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:05.653 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:05.653 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:05.653 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:05.653 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.653 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:05.653 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.654 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:21:05.654 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:21:05.654 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:05.912 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:05.912 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:05.912 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:05.912 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:05.912 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.912 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:05.912 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.912 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:05.912 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:05.912 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:06.169 09:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:06.169 09:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:06.169 09:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:06.169 09:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:06.170 09:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.170 09:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:06.170 09:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.170 09:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=137 00:21:06.170 09:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 137 -ge 100 ']' 00:21:06.170 09:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:21:06.170 09:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:21:06.170 09:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:21:06.170 09:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3374958 00:21:06.170 09:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 3374958 
']' 00:21:06.170 09:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 3374958 00:21:06.170 09:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:21:06.170 09:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:06.170 09:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3374958 00:21:06.428 09:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:06.428 09:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:06.428 09:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3374958' 00:21:06.428 killing process with pid 3374958 00:21:06.428 09:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 3374958 00:21:06.428 09:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 3374958 00:21:06.428 Received shutdown signal, test time was about 0.968135 seconds 00:21:06.428 00:21:06.428 Latency(us) 00:21:06.428 [2024-11-20T08:11:43.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.428 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.428 Verification LBA range: start 0x0 length 0x400 00:21:06.428 Nvme1n1 : 0.92 219.89 13.74 0.00 0.00 284702.56 6602.15 254765.13 00:21:06.428 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.428 Verification LBA range: start 0x0 length 0x400 00:21:06.428 Nvme2n1 : 0.93 206.95 12.93 0.00 0.00 299311.41 20388.98 256318.58 
00:21:06.428 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.428 Verification LBA range: start 0x0 length 0x400 00:21:06.428 Nvme3n1 : 0.96 265.45 16.59 0.00 0.00 228853.00 19126.80 236123.78 00:21:06.428 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.428 Verification LBA range: start 0x0 length 0x400 00:21:06.428 Nvme4n1 : 0.95 273.22 17.08 0.00 0.00 216987.64 3640.89 254765.13 00:21:06.428 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.428 Verification LBA range: start 0x0 length 0x400 00:21:06.428 Nvme5n1 : 0.94 203.80 12.74 0.00 0.00 285456.06 22039.51 257872.02 00:21:06.428 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.428 Verification LBA range: start 0x0 length 0x400 00:21:06.428 Nvme6n1 : 0.96 267.03 16.69 0.00 0.00 213227.52 17670.45 256318.58 00:21:06.428 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.428 Verification LBA range: start 0x0 length 0x400 00:21:06.428 Nvme7n1 : 0.94 204.75 12.80 0.00 0.00 271743.56 20874.43 257872.02 00:21:06.428 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.428 Verification LBA range: start 0x0 length 0x400 00:21:06.428 Nvme8n1 : 0.97 264.65 16.54 0.00 0.00 206488.46 17379.18 256318.58 00:21:06.428 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.428 Verification LBA range: start 0x0 length 0x400 00:21:06.428 Nvme9n1 : 0.96 200.98 12.56 0.00 0.00 265366.19 25826.04 284280.60 00:21:06.428 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.428 Verification LBA range: start 0x0 length 0x400 00:21:06.428 Nvme10n1 : 0.95 202.88 12.68 0.00 0.00 256866.48 21068.61 259425.47 00:21:06.428 [2024-11-20T08:11:43.457Z] =================================================================================================================== 00:21:06.428 
[2024-11-20T08:11:43.457Z] Total : 2309.59 144.35 0.00 0.00 248728.37 3640.89 284280.60 00:21:06.687 09:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:21:07.620 09:11:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3374780 00:21:07.620 09:11:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:21:07.620 09:11:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:07.620 09:11:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:07.620 09:11:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:07.620 09:11:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:07.620 09:11:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:07.620 09:11:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:21:07.620 09:11:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:07.620 09:11:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:21:07.620 09:11:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:07.620 09:11:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:07.620 rmmod nvme_tcp 00:21:07.620 rmmod nvme_fabrics 00:21:07.620 rmmod nvme_keyring 00:21:07.620 09:11:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:07.620 09:11:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:21:07.620 09:11:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:21:07.620 09:11:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3374780 ']' 00:21:07.620 09:11:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3374780 00:21:07.620 09:11:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 3374780 ']' 00:21:07.620 09:11:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 3374780 00:21:07.620 09:11:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:21:07.620 09:11:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:07.620 09:11:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3374780 00:21:07.620 09:11:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:07.620 09:11:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:07.620 09:11:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3374780' 00:21:07.620 killing process with pid 3374780 00:21:07.620 09:11:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 3374780 00:21:07.620 09:11:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@976 -- # wait 3374780 00:21:08.187 09:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:08.187 09:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:08.187 09:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:08.187 09:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:21:08.187 09:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:21:08.187 09:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:08.187 09:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:21:08.187 09:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:08.187 09:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:08.187 09:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.187 09:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:08.187 09:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.720 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:10.720 00:21:10.720 real 0m8.029s 00:21:10.720 user 0m25.268s 00:21:10.720 sys 0m1.501s 00:21:10.720 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 
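Both teardown paths above (pid 3374958 for bdevperf, pid 3374780 for the nvmf app) run the `killprocess` helper from autotest_common.sh. Its traced steps — `'[' -z $pid ']'`, `kill -0`, `uname`, `ps --no-headers -o comm=`, the "killing process with pid" echo, `kill`, `wait` — correspond to roughly the following logic. This is a reconstruction from the xtrace, not the helper's verbatim source.

```shell
#!/usr/bin/env bash
# Reconstruction of the killprocess flow seen in the trace: verify the pid
# is set and still alive, check what the process is (never kill a sudo
# wrapper), then kill it and wait so its exit status is reaped.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                      # '[' -z $pid ']' in the trace
    kill -0 "$pid" 2>/dev/null || return 1         # kill -0: still running?
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
        [ "$process_name" = sudo ] && return 1            # refuse to kill sudo itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                # reap if it is our child
}

sleep 30 &          # stand-in for the nvmf/bdevperf app
killprocess $!
```

The `ps -o comm=` check is what produces the `process_name=reactor_0` / `process_name=reactor_1` lines in the trace: SPDK apps rename their main thread to the reactor name, so the helper can confirm it is killing an SPDK process and not the sudo wrapper that launched it.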
00:21:10.720 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:10.720 ************************************ 00:21:10.720 END TEST nvmf_shutdown_tc2 00:21:10.720 ************************************ 00:21:10.720 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:10.720 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:10.720 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:10.720 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:10.720 ************************************ 00:21:10.720 START TEST nvmf_shutdown_tc3 00:21:10.720 ************************************ 00:21:10.720 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:21:10.720 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:21:10.720 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:10.720 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:10.720 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.720 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:10.720 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:10.720 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:10.720 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.720 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:10.720 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.720 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:10.720 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:10.720 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:10.720 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:10.720 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:10.720 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:10.720 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:10.720 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:10.720 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:10.720 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:10.720 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 
-- # local -ga net_devs 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:10.721 09:11:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:10.721 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:10.721 09:11:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:10.721 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:10.721 Found net devices under 0000:09:00.0: cvl_0_0 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:10.721 Found net devices under 0000:09:00.1: cvl_0_1 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:10.721 09:11:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:10.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:10.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:21:10.721 00:21:10.721 --- 10.0.0.2 ping statistics --- 00:21:10.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.721 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:10.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:10.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:21:10.721 00:21:10.721 --- 10.0.0.1 ping statistics --- 00:21:10.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.721 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3375879 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3375879 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 3375879 ']' 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:10.721 09:11:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:10.721 [2024-11-20 09:11:47.448371] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:21:10.721 [2024-11-20 09:11:47.448460] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.721 [2024-11-20 09:11:47.520122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:10.721 [2024-11-20 09:11:47.580723] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.721 [2024-11-20 09:11:47.580771] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.721 [2024-11-20 09:11:47.580800] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.721 [2024-11-20 09:11:47.580812] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.721 [2024-11-20 09:11:47.580828] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:10.721 [2024-11-20 09:11:47.582331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:10.721 [2024-11-20 09:11:47.582393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:10.721 [2024-11-20 09:11:47.582441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:10.721 [2024-11-20 09:11:47.582444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:10.721 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:10.979 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:10.979 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:10.979 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.979 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:10.979 [2024-11-20 09:11:47.755053] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.979 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.979 09:11:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:10.979 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:10.979 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:10.979 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:10.979 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:10.979 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:10.979 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:10.979 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:10.979 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:10.979 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:10.979 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:10.979 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:10.979 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:10.979 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:10.979 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:21:10.979 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:10.979 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:10.979 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:10.979 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:10.979 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:10.979 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:10.979 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:10.979 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:10.979 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:10.979 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:10.979 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:10.979 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.979 09:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:10.979 Malloc1 00:21:10.979 [2024-11-20 09:11:47.864786] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:10.979 Malloc2 00:21:10.979 Malloc3 00:21:10.979 Malloc4 00:21:11.236 Malloc5 00:21:11.236 Malloc6 00:21:11.236 Malloc7 00:21:11.236 Malloc8 00:21:11.236 Malloc9 
00:21:11.496 Malloc10 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3376054 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3376054 /var/tmp/bdevperf.sock 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 3376054 ']' 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:21:11.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:11.496 { 00:21:11.496 "params": { 00:21:11.496 "name": "Nvme$subsystem", 00:21:11.496 "trtype": "$TEST_TRANSPORT", 00:21:11.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:11.496 "adrfam": "ipv4", 00:21:11.496 "trsvcid": "$NVMF_PORT", 00:21:11.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:11.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:11.496 "hdgst": ${hdgst:-false}, 00:21:11.496 "ddgst": ${ddgst:-false} 00:21:11.496 }, 00:21:11.496 "method": "bdev_nvme_attach_controller" 00:21:11.496 } 00:21:11.496 EOF 00:21:11.496 )") 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:11.496 { 00:21:11.496 "params": { 00:21:11.496 "name": "Nvme$subsystem", 00:21:11.496 "trtype": "$TEST_TRANSPORT", 00:21:11.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:11.496 "adrfam": "ipv4", 00:21:11.496 "trsvcid": "$NVMF_PORT", 00:21:11.496 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:11.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:11.496 "hdgst": ${hdgst:-false}, 00:21:11.496 "ddgst": ${ddgst:-false} 00:21:11.496 }, 00:21:11.496 "method": "bdev_nvme_attach_controller" 00:21:11.496 } 00:21:11.496 EOF 00:21:11.496 )") 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:11.496 { 00:21:11.496 "params": { 00:21:11.496 "name": "Nvme$subsystem", 00:21:11.496 "trtype": "$TEST_TRANSPORT", 00:21:11.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:11.496 "adrfam": "ipv4", 00:21:11.496 "trsvcid": "$NVMF_PORT", 00:21:11.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:11.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:11.496 "hdgst": ${hdgst:-false}, 00:21:11.496 "ddgst": ${ddgst:-false} 00:21:11.496 }, 00:21:11.496 "method": "bdev_nvme_attach_controller" 00:21:11.496 } 00:21:11.496 EOF 00:21:11.496 )") 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:11.496 { 00:21:11.496 "params": { 00:21:11.496 "name": "Nvme$subsystem", 00:21:11.496 "trtype": "$TEST_TRANSPORT", 00:21:11.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:11.496 "adrfam": "ipv4", 00:21:11.496 "trsvcid": "$NVMF_PORT", 00:21:11.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:11.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:11.496 "hdgst": 
${hdgst:-false}, 00:21:11.496 "ddgst": ${ddgst:-false} 00:21:11.496 }, 00:21:11.496 "method": "bdev_nvme_attach_controller" 00:21:11.496 } 00:21:11.496 EOF 00:21:11.496 )") 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:11.496 { 00:21:11.496 "params": { 00:21:11.496 "name": "Nvme$subsystem", 00:21:11.496 "trtype": "$TEST_TRANSPORT", 00:21:11.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:11.496 "adrfam": "ipv4", 00:21:11.496 "trsvcid": "$NVMF_PORT", 00:21:11.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:11.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:11.496 "hdgst": ${hdgst:-false}, 00:21:11.496 "ddgst": ${ddgst:-false} 00:21:11.496 }, 00:21:11.496 "method": "bdev_nvme_attach_controller" 00:21:11.496 } 00:21:11.496 EOF 00:21:11.496 )") 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:11.496 { 00:21:11.496 "params": { 00:21:11.496 "name": "Nvme$subsystem", 00:21:11.496 "trtype": "$TEST_TRANSPORT", 00:21:11.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:11.496 "adrfam": "ipv4", 00:21:11.496 "trsvcid": "$NVMF_PORT", 00:21:11.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:11.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:11.496 "hdgst": ${hdgst:-false}, 00:21:11.496 "ddgst": ${ddgst:-false} 00:21:11.496 }, 00:21:11.496 "method": "bdev_nvme_attach_controller" 
00:21:11.496 } 00:21:11.496 EOF 00:21:11.496 )") 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:11.496 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:11.496 { 00:21:11.496 "params": { 00:21:11.497 "name": "Nvme$subsystem", 00:21:11.497 "trtype": "$TEST_TRANSPORT", 00:21:11.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:11.497 "adrfam": "ipv4", 00:21:11.497 "trsvcid": "$NVMF_PORT", 00:21:11.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:11.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:11.497 "hdgst": ${hdgst:-false}, 00:21:11.497 "ddgst": ${ddgst:-false} 00:21:11.497 }, 00:21:11.497 "method": "bdev_nvme_attach_controller" 00:21:11.497 } 00:21:11.497 EOF 00:21:11.497 )") 00:21:11.497 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:11.497 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:11.497 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:11.497 { 00:21:11.497 "params": { 00:21:11.497 "name": "Nvme$subsystem", 00:21:11.497 "trtype": "$TEST_TRANSPORT", 00:21:11.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:11.497 "adrfam": "ipv4", 00:21:11.497 "trsvcid": "$NVMF_PORT", 00:21:11.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:11.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:11.497 "hdgst": ${hdgst:-false}, 00:21:11.497 "ddgst": ${ddgst:-false} 00:21:11.497 }, 00:21:11.497 "method": "bdev_nvme_attach_controller" 00:21:11.497 } 00:21:11.497 EOF 00:21:11.497 )") 00:21:11.497 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@582 -- # cat 00:21:11.497 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:11.497 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:11.497 { 00:21:11.497 "params": { 00:21:11.497 "name": "Nvme$subsystem", 00:21:11.497 "trtype": "$TEST_TRANSPORT", 00:21:11.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:11.497 "adrfam": "ipv4", 00:21:11.497 "trsvcid": "$NVMF_PORT", 00:21:11.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:11.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:11.497 "hdgst": ${hdgst:-false}, 00:21:11.497 "ddgst": ${ddgst:-false} 00:21:11.497 }, 00:21:11.497 "method": "bdev_nvme_attach_controller" 00:21:11.497 } 00:21:11.497 EOF 00:21:11.497 )") 00:21:11.497 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:11.497 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:11.497 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:11.497 { 00:21:11.497 "params": { 00:21:11.497 "name": "Nvme$subsystem", 00:21:11.497 "trtype": "$TEST_TRANSPORT", 00:21:11.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:11.497 "adrfam": "ipv4", 00:21:11.497 "trsvcid": "$NVMF_PORT", 00:21:11.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:11.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:11.497 "hdgst": ${hdgst:-false}, 00:21:11.497 "ddgst": ${ddgst:-false} 00:21:11.497 }, 00:21:11.497 "method": "bdev_nvme_attach_controller" 00:21:11.497 } 00:21:11.497 EOF 00:21:11.497 )") 00:21:11.497 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:11.497 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@584 -- # jq . 00:21:11.497 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:21:11.497 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:11.497 "params": { 00:21:11.497 "name": "Nvme1", 00:21:11.497 "trtype": "tcp", 00:21:11.497 "traddr": "10.0.0.2", 00:21:11.497 "adrfam": "ipv4", 00:21:11.497 "trsvcid": "4420", 00:21:11.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:11.497 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:11.497 "hdgst": false, 00:21:11.497 "ddgst": false 00:21:11.497 }, 00:21:11.497 "method": "bdev_nvme_attach_controller" 00:21:11.497 },{ 00:21:11.497 "params": { 00:21:11.497 "name": "Nvme2", 00:21:11.497 "trtype": "tcp", 00:21:11.497 "traddr": "10.0.0.2", 00:21:11.497 "adrfam": "ipv4", 00:21:11.497 "trsvcid": "4420", 00:21:11.497 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:11.497 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:11.497 "hdgst": false, 00:21:11.497 "ddgst": false 00:21:11.497 }, 00:21:11.497 "method": "bdev_nvme_attach_controller" 00:21:11.497 },{ 00:21:11.497 "params": { 00:21:11.497 "name": "Nvme3", 00:21:11.497 "trtype": "tcp", 00:21:11.497 "traddr": "10.0.0.2", 00:21:11.497 "adrfam": "ipv4", 00:21:11.497 "trsvcid": "4420", 00:21:11.497 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:11.497 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:11.497 "hdgst": false, 00:21:11.497 "ddgst": false 00:21:11.497 }, 00:21:11.497 "method": "bdev_nvme_attach_controller" 00:21:11.497 },{ 00:21:11.497 "params": { 00:21:11.497 "name": "Nvme4", 00:21:11.497 "trtype": "tcp", 00:21:11.497 "traddr": "10.0.0.2", 00:21:11.497 "adrfam": "ipv4", 00:21:11.497 "trsvcid": "4420", 00:21:11.497 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:11.497 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:11.497 "hdgst": false, 00:21:11.497 "ddgst": false 00:21:11.497 }, 00:21:11.497 "method": "bdev_nvme_attach_controller" 00:21:11.497 },{ 
00:21:11.497 "params": { 00:21:11.497 "name": "Nvme5", 00:21:11.497 "trtype": "tcp", 00:21:11.497 "traddr": "10.0.0.2", 00:21:11.497 "adrfam": "ipv4", 00:21:11.497 "trsvcid": "4420", 00:21:11.497 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:11.497 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:11.497 "hdgst": false, 00:21:11.497 "ddgst": false 00:21:11.497 }, 00:21:11.497 "method": "bdev_nvme_attach_controller" 00:21:11.497 },{ 00:21:11.497 "params": { 00:21:11.497 "name": "Nvme6", 00:21:11.497 "trtype": "tcp", 00:21:11.497 "traddr": "10.0.0.2", 00:21:11.497 "adrfam": "ipv4", 00:21:11.497 "trsvcid": "4420", 00:21:11.497 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:11.497 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:11.497 "hdgst": false, 00:21:11.497 "ddgst": false 00:21:11.497 }, 00:21:11.497 "method": "bdev_nvme_attach_controller" 00:21:11.497 },{ 00:21:11.497 "params": { 00:21:11.497 "name": "Nvme7", 00:21:11.497 "trtype": "tcp", 00:21:11.497 "traddr": "10.0.0.2", 00:21:11.497 "adrfam": "ipv4", 00:21:11.497 "trsvcid": "4420", 00:21:11.497 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:11.497 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:11.497 "hdgst": false, 00:21:11.497 "ddgst": false 00:21:11.497 }, 00:21:11.497 "method": "bdev_nvme_attach_controller" 00:21:11.497 },{ 00:21:11.497 "params": { 00:21:11.497 "name": "Nvme8", 00:21:11.497 "trtype": "tcp", 00:21:11.497 "traddr": "10.0.0.2", 00:21:11.497 "adrfam": "ipv4", 00:21:11.497 "trsvcid": "4420", 00:21:11.497 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:11.497 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:11.497 "hdgst": false, 00:21:11.497 "ddgst": false 00:21:11.497 }, 00:21:11.497 "method": "bdev_nvme_attach_controller" 00:21:11.497 },{ 00:21:11.497 "params": { 00:21:11.497 "name": "Nvme9", 00:21:11.497 "trtype": "tcp", 00:21:11.497 "traddr": "10.0.0.2", 00:21:11.497 "adrfam": "ipv4", 00:21:11.497 "trsvcid": "4420", 00:21:11.497 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:11.497 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:21:11.497 "hdgst": false, 00:21:11.497 "ddgst": false 00:21:11.497 }, 00:21:11.497 "method": "bdev_nvme_attach_controller" 00:21:11.497 },{ 00:21:11.497 "params": { 00:21:11.497 "name": "Nvme10", 00:21:11.497 "trtype": "tcp", 00:21:11.497 "traddr": "10.0.0.2", 00:21:11.497 "adrfam": "ipv4", 00:21:11.497 "trsvcid": "4420", 00:21:11.497 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:11.497 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:11.497 "hdgst": false, 00:21:11.497 "ddgst": false 00:21:11.497 }, 00:21:11.497 "method": "bdev_nvme_attach_controller" 00:21:11.497 }' 00:21:11.497 [2024-11-20 09:11:48.386754] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:21:11.497 [2024-11-20 09:11:48.386832] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3376054 ] 00:21:11.497 [2024-11-20 09:11:48.458749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.497 [2024-11-20 09:11:48.518945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.394 Running I/O for 10 seconds... 
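The trace above shows `gen_nvmf_target_json` building the bdevperf `--json` config: one heredoc fragment is appended to a `config` array per subsystem, and the fragments are then joined with commas (`IFS=,` plus `printf`). Below is a minimal, self-contained sketch of that pattern only; `gen_config` is an illustrative name, not SPDK's actual helper, and the fixed `10.0.0.2`/`4420` values stand in for the `$NVMF_FIRST_TARGET_IP`/`$NVMF_PORT` variables the real script expands.

```shell
#!/usr/bin/env bash
# Sketch of the heredoc-accumulate-then-join pattern seen in the log.
# gen_config is hypothetical; the real helper is gen_nvmf_target_json.
gen_config() {
  local subsystem config=()
  for subsystem in "$@"; do
    # Each iteration captures one JSON fragment from a heredoc, with
    # $subsystem expanded into the controller name and NQNs.
    config+=("$(cat <<EOF
{"params": {"name": "Nvme$subsystem", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", "hdgst": false, "ddgst": false}, "method": "bdev_nvme_attach_controller"}
EOF
    )")
  done
  # Join the fragments with commas, mirroring IFS=, + printf in the log.
  local IFS=,
  printf '%s\n' "${config[*]}"
}

gen_config 1 2
```

The joined output is what the test feeds to bdevperf via `--json /dev/fd/63`, i.e. through process substitution rather than a temp file.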
00:21:13.653 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:13.653 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:21:13.653 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:13.653 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.653 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:13.653 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.653 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:13.653 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:13.653 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:13.653 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:13.653 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:21:13.653 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:21:13.653 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:13.653 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:13.653 09:11:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:13.653 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:13.653 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.653 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:13.653 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.653 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:13.653 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:13.653 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:13.926 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:13.926 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:13.926 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:13.926 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.926 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:13.926 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:13.926 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:21:13.926 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:21:13.926 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:13.926 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:21:13.926 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:21:13.926 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:21:13.926 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3375879 00:21:13.926 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 3375879 ']' 00:21:13.926 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 3375879 00:21:13.926 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:21:13.927 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:13.927 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3375879 00:21:13.927 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:13.927 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:13.927 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3375879' 00:21:13.927 killing process with pid 3375879 00:21:13.927 09:11:50 
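The `waitforio` calls traced above poll `bdev_get_iostat` for `Nvme1n1` up to 10 times, sleeping 0.25 s between samples, and succeed once `num_read_ops` reaches 100 (the log samples 67, then 131). The sketch below reproduces that bounded-retry loop in isolation; `poll_once` is a stub standing in for `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops'`, and it replays the two values seen in the log.

```shell
#!/usr/bin/env bash
# Sketch of the waitforio polling pattern from target/shutdown.sh.
reads=(67 131)  # replay the two read-op samples seen in the log
idx=0
OPS=0
poll_once() {
  # Stub for the real iostat RPC + jq query; sets OPS to the next sample.
  OPS=${reads[idx]}
  idx=$((idx+1))
}

waitforio() {
  local i=10 ret=1
  while ((i != 0)); do
    poll_once
    if ((OPS >= 100)); then
      ret=0  # enough I/O observed; the real script then breaks out
      break
    fi
    sleep 0.25
    i=$((i-1))
  done
  return $ret
}

waitforio && status=ok || status=fail
```

Returning non-zero after 10 failed samples is what lets the caller's trap (`process_shm ...; kill -9 $perfpid; nvmftestfini`) tear the test down instead of hanging forever on a target that never serves I/O.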
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 3375879 00:21:13.927 09:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 3375879 00:21:13.927 [2024-11-20 09:11:50.822600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef640 is same with the state(6) to be set 00:21:13.927 [2024-11-20 09:11:50.822706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef640 is same with the state(6) to be set 00:21:13.927 [2024-11-20 09:11:50.822732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef640 is same with the state(6) to be set 00:21:13.927 [2024-11-20 09:11:50.822754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef640 is same with the state(6) to be set 00:21:13.927 [2024-11-20 09:11:50.822787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef640 is same with the state(6) to be set 00:21:13.927 [2024-11-20 09:11:50.822806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef640 is same with the state(6) to be set 00:21:13.927 [2024-11-20 09:11:50.822825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef640 is same with the state(6) to be set 00:21:13.927 [2024-11-20 09:11:50.822845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef640 is same with the state(6) to be set 00:21:13.927 [2024-11-20 09:11:50.822863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef640 is same with the state(6) to be set 00:21:13.927 [2024-11-20 09:11:50.822883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef640 is same with the state(6) to be set 00:21:13.927 [2024-11-20 09:11:50.822903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x14ef640 is same with the state(6) to be set
00:21:13.927 [... identical tcp.c:1773:nvmf_tcp_qpair_set_recv_state *ERROR* for tqpair=0x14ef640 repeated, timestamps 09:11:50.822922 through 09:11:50.824049 ...]
00:21:13.927 [2024-11-20 09:11:50.825524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c7f20 is same with the state(6) to be set
00:21:13.928 [... identical tcp.c:1773:nvmf_tcp_qpair_set_recv_state *ERROR* for tqpair=0x14c7f20 repeated, timestamps 09:11:50.825560 through 09:11:50.826368 ...]
00:21:13.928 [2024-11-20 09:11:50.827776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14efb30 is same with the state(6) to be set
00:21:13.929 [... identical tcp.c:1773:nvmf_tcp_qpair_set_recv_state *ERROR* for tqpair=0x14efb30 repeated, timestamps 09:11:50.827802 through 09:11:50.828652 ...]
00:21:13.929 [2024-11-20 09:11:50.829551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:13.929 [2024-11-20 09:11:50.829604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:13.929 [... ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for cid:1, cid:2, cid:3 ...]
00:21:13.929 [2024-11-20 09:11:50.829723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6e620 is same with the state(6) to be set
00:21:13.929 [... same ASYNC EVENT REQUEST (cid:0-3) / ABORTED - SQ DELETION sequence and nvme_tcp.c:326 recv-state *ERROR* repeated for tqpair=0xb066f0 (09:11:50.829909), tqpair=0xafa6e0 (09:11:50.830098), and tqpair=0xb02890 (09:11:50.830268) ...]
00:21:13.929 [2024-11-20 09:11:50.832027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0850 is same with the state(6) to be set
00:21:13.929 [2024-11-20 09:11:50.832047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:13.929 [2024-11-20 09:11:50.832074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:13.930 [... WRITE / ABORTED - SQ DELETION pair repeated for cid:11 through cid:24 (lba:25984 through lba:27648, len:128), interleaved with tcp.c:1773 recv-state *ERROR* lines for tqpair=0x14f0850 ...]
00:21:13.930 [2024-11-20 09:11:50.832541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.930 [2024-11-20 09:11:50.832554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.930 [2024-11-20 09:11:50.832569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.930 [2024-11-20 09:11:50.832583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.930 [2024-11-20 09:11:50.832597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.930 [2024-11-20 09:11:50.832611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.930 [2024-11-20 09:11:50.832631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.930 [2024-11-20 09:11:50.832645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.930 [2024-11-20 09:11:50.832660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.930 [2024-11-20 09:11:50.832674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.930 [2024-11-20 09:11:50.832689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.930 [2024-11-20 09:11:50.832703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.930 
[2024-11-20 09:11:50.832718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.930 [2024-11-20 09:11:50.832731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.930 [2024-11-20 09:11:50.832747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.930 [2024-11-20 09:11:50.832760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.930 [2024-11-20 09:11:50.832775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.930 [2024-11-20 09:11:50.832789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.930 [2024-11-20 09:11:50.832804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.930 [2024-11-20 09:11:50.832818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.930 [2024-11-20 09:11:50.832833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.930 [2024-11-20 09:11:50.832847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.930 [2024-11-20 09:11:50.832862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.930 [2024-11-20 09:11:50.832875] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.930 [2024-11-20 09:11:50.832890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.930 [2024-11-20 09:11:50.832904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.930 [2024-11-20 09:11:50.832919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.930 [2024-11-20 09:11:50.832933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.930 [2024-11-20 09:11:50.832948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.930 [2024-11-20 09:11:50.832961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.930 [2024-11-20 09:11:50.832976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.930 [2024-11-20 09:11:50.832994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.930 [2024-11-20 09:11:50.833010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.930 [2024-11-20 09:11:50.833024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.930 [2024-11-20 09:11:50.833039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.930 [2024-11-20 09:11:50.833053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.930 [2024-11-20 09:11:50.833069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.930 [2024-11-20 09:11:50.833082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.930 [2024-11-20 09:11:50.833098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.930 [2024-11-20 09:11:50.833111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.930 [2024-11-20 09:11:50.833126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.930 [2024-11-20 09:11:50.833140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.930 [2024-11-20 09:11:50.833156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.930 [2024-11-20 09:11:50.833169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.930 [2024-11-20 09:11:50.833200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.930 [2024-11-20 09:11:50.833214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:13.930 [2024-11-20 09:11:50.833229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.930 [2024-11-20 09:11:50.833243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.931 [2024-11-20 09:11:50.833257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.931 [2024-11-20 09:11:50.833270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.931 [2024-11-20 09:11:50.833307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.931 [2024-11-20 09:11:50.833323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.931 [2024-11-20 09:11:50.833340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.931 [2024-11-20 09:11:50.833363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.931 [2024-11-20 09:11:50.833379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.931 [2024-11-20 09:11:50.833393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.931 [2024-11-20 09:11:50.833401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.931 [2024-11-20 09:11:50.833413] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.931
[2024-11-20 09:11:50.833427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.931
[2024-11-20 09:11:50.833428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.931
[2024-11-20 09:11:50.833443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.931
[2024-11-20 09:11:50.833447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.931
[2024-11-20 09:11:50.833455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.931
[2024-11-20 09:11:50.833461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.931
[2024-11-20 09:11:50.833467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.931
[2024-11-20 09:11:50.833477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.931
[2024-11-20 09:11:50.833479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.931
[2024-11-20 09:11:50.833493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.931
[2024-11-20 09:11:50.833493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.931
[2024-11-20 09:11:50.833508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.931
[2024-11-20 09:11:50.833511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.931
[2024-11-20 09:11:50.833521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.931
[2024-11-20 09:11:50.833527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.931
[2024-11-20 09:11:50.833533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.931
[2024-11-20 09:11:50.833542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.931
[2024-11-20 09:11:50.833545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.931
[2024-11-20 09:11:50.833558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.931
[2024-11-20 09:11:50.833558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.931
[2024-11-20 09:11:50.833573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.931
[2024-11-20 09:11:50.833575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.931
[2024-11-20 09:11:50.833584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.931
[2024-11-20 09:11:50.833590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.931
[2024-11-20 09:11:50.833601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.931
[2024-11-20 09:11:50.833613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.931
[2024-11-20 09:11:50.833614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.931
[2024-11-20 09:11:50.833642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.931
[2024-11-20 09:11:50.833644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.931
[2024-11-20 09:11:50.833655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.931
[2024-11-20 09:11:50.833661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.931
[2024-11-20 09:11:50.833677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.931
[2024-11-20 09:11:50.833679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.931
[2024-11-20 09:11:50.833689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.931
[2024-11-20 09:11:50.833694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.931
[2024-11-20 09:11:50.833700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.931
[2024-11-20 09:11:50.833709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.931
[2024-11-20 09:11:50.833712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.931
[2024-11-20 09:11:50.833724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.931
[2024-11-20 09:11:50.833724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.931
[2024-11-20 09:11:50.833737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.931
[2024-11-20 09:11:50.833740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.931
[2024-11-20 09:11:50.833749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.931
[2024-11-20 09:11:50.833755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.931
[2024-11-20 09:11:50.833761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.931
[2024-11-20 09:11:50.833768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.931
[2024-11-20 09:11:50.833773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.931
[2024-11-20 09:11:50.833784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.931
[2024-11-20 09:11:50.833784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.931
[2024-11-20 09:11:50.833799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.931
[2024-11-20 09:11:50.833799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.931
[2024-11-20 09:11:50.833817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.931
[2024-11-20 09:11:50.833820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.931
[2024-11-20 09:11:50.833830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.931
[2024-11-20 09:11:50.833834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.931
[2024-11-20 09:11:50.833841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.931
[2024-11-20 09:11:50.833849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.932
[2024-11-20 09:11:50.833853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.932
[2024-11-20 09:11:50.833863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.932
[2024-11-20 09:11:50.833865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.932
[2024-11-20 09:11:50.833877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.932
[2024-11-20 09:11:50.833878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.932
[2024-11-20 09:11:50.833891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.932
[2024-11-20 09:11:50.833893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.932
[2024-11-20 09:11:50.833902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.932
[2024-11-20 09:11:50.833908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.932
[2024-11-20 09:11:50.833914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.932
[2024-11-20 09:11:50.833921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.932
[2024-11-20 09:11:50.833926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.932
[2024-11-20 09:11:50.833937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.932
[2024-11-20 09:11:50.833938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.932
[2024-11-20 09:11:50.833952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.932
[2024-11-20 09:11:50.833952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.932
[2024-11-20 09:11:50.833965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.932
[2024-11-20 09:11:50.833969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.932
[2024-11-20 09:11:50.833982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.932
[2024-11-20 09:11:50.833984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.932
[2024-11-20 09:11:50.834003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.932
[2024-11-20 09:11:50.834006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.932
[2024-11-20 09:11:50.834015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.932
[2024-11-20 09:11:50.834020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.932
[2024-11-20 09:11:50.834027]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.932 [2024-11-20 09:11:50.834039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.932 [2024-11-20 09:11:50.834041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.932 [2024-11-20 09:11:50.834050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.932 [2024-11-20 09:11:50.834056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.932 [2024-11-20 09:11:50.834062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.932 [2024-11-20 09:11:50.834070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.932 [2024-11-20 09:11:50.834074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.932 [2024-11-20 09:11:50.834084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.932 [2024-11-20 09:11:50.834086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.932 [2024-11-20 09:11:50.834098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.932 [2024-11-20 09:11:50.834110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.932 [2024-11-20 
09:11:50.834121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.932 [2024-11-20 09:11:50.834125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:13.932 [2024-11-20 09:11:50.834133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.932 [2024-11-20 09:11:50.834145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.932 [2024-11-20 09:11:50.834156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.932 [2024-11-20 09:11:50.834168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.932 [2024-11-20 09:11:50.834179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.932 [2024-11-20 09:11:50.834190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.932 [2024-11-20 09:11:50.834201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.932 [2024-11-20 09:11:50.834212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.932 [2024-11-20 09:11:50.834227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0d20 is same with the state(6) to be set 00:21:13.932 [2024-11-20 09:11:50.834475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:13.932 [2024-11-20 09:11:50.834507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.932 [2024-11-20 09:11:50.834528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.932 [2024-11-20 09:11:50.834543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.932 [2024-11-20 09:11:50.834558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.932 [2024-11-20 09:11:50.834572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.932 [2024-11-20 09:11:50.834598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.932 [2024-11-20 09:11:50.834612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.932 [2024-11-20 09:11:50.834627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.932 [2024-11-20 09:11:50.834641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.932 [2024-11-20 09:11:50.834663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.932 [2024-11-20 09:11:50.834676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.932 [2024-11-20 09:11:50.834691] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.932 [2024-11-20 09:11:50.834705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.932 [2024-11-20 09:11:50.834720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.932 [2024-11-20 09:11:50.834733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.932 [2024-11-20 09:11:50.834748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.932 [2024-11-20 09:11:50.834762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.932 [2024-11-20 09:11:50.834777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.932 [2024-11-20 09:11:50.834790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.932 [2024-11-20 09:11:50.834831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.932 [2024-11-20 09:11:50.834844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.932 [2024-11-20 09:11:50.834859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.932 [2024-11-20 09:11:50.834872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.932 [2024-11-20 09:11:50.834895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.932 [2024-11-20 09:11:50.834910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.932 [2024-11-20 09:11:50.834925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.932 [2024-11-20 09:11:50.834938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.932 [2024-11-20 09:11:50.834954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.932 [2024-11-20 09:11:50.834968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.933 [2024-11-20 09:11:50.834982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.933 [2024-11-20 09:11:50.834996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.933 [2024-11-20 09:11:50.835010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.933 [2024-11-20 09:11:50.835024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.933 [2024-11-20 09:11:50.835039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.933 [2024-11-20 09:11:50.835069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.933 [2024-11-20 09:11:50.835085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.933 [2024-11-20 09:11:50.835099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.933 [2024-11-20 09:11:50.835114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.933 [2024-11-20 09:11:50.835128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.933 [2024-11-20 09:11:50.835143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.933 [2024-11-20 09:11:50.835157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.933 [2024-11-20 09:11:50.835173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.933 [2024-11-20 09:11:50.835187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.933 [2024-11-20 09:11:50.835202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.933 [2024-11-20 09:11:50.835216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.933 
[2024-11-20 09:11:50.835231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.933 [2024-11-20 09:11:50.835245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.933 [2024-11-20 09:11:50.835260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.933 [2024-11-20 09:11:50.835277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.933 [2024-11-20 09:11:50.835300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.933 [2024-11-20 09:11:50.835323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.933 [2024-11-20 09:11:50.835340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.933 [2024-11-20 09:11:50.835354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.933 [2024-11-20 09:11:50.835369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.933 [2024-11-20 09:11:50.835384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.933 [2024-11-20 09:11:50.835399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.933 [2024-11-20 09:11:50.835413] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.933 [2024-11-20 09:11:50.835428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.933 [2024-11-20 09:11:50.835442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.933 [2024-11-20 09:11:50.835457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.933 [2024-11-20 09:11:50.835471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.933 [2024-11-20 09:11:50.835486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.933 [2024-11-20 09:11:50.835500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.933 [2024-11-20 09:11:50.835516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.933 [2024-11-20 09:11:50.835530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.933 [2024-11-20 09:11:50.835545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.933 [2024-11-20 09:11:50.835559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.933 [2024-11-20 09:11:50.835575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.933 [2024-11-20 09:11:50.835589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.933 [2024-11-20 09:11:50.835606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.933 [2024-11-20 09:11:50.835620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.933 [2024-11-20 09:11:50.835635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.933 [2024-11-20 09:11:50.835649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.933 [2024-11-20 09:11:50.835668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.933 [2024-11-20 09:11:50.835682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.933 [2024-11-20 09:11:50.835698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.933 [2024-11-20 09:11:50.835712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.933 [2024-11-20 09:11:50.835727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.933 [2024-11-20 09:11:50.835741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0
00:21:13.933 [2024-11-20 09:11:50.835756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:13.933 [2024-11-20 09:11:50.835758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.933 [2024-11-20 09:11:50.835770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:13.933 [2024-11-20 09:11:50.835789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:13.933 [2024-11-20 09:11:50.835790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.933 [2024-11-20 09:11:50.835804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:13.933 [2024-11-20 09:11:50.835806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.933 [2024-11-20 09:11:50.835819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.933 [2024-11-20 09:11:50.835819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:13.933 [2024-11-20 09:11:50.835831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.933 [2024-11-20 09:11:50.835834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:13.933 [2024-11-20 09:11:50.835843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.933 [2024-11-20 09:11:50.835850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:13.933 [2024-11-20 09:11:50.835856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.933 [2024-11-20 09:11:50.835865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:13.933 [2024-11-20 09:11:50.835868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.933 [2024-11-20 09:11:50.835880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.933 [2024-11-20 09:11:50.835881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:13.933 [2024-11-20 09:11:50.835894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.933 [2024-11-20 09:11:50.835896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:13.933 [2024-11-20 09:11:50.835913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.933 [2024-11-20 09:11:50.835918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:13.933 [2024-11-20 09:11:50.835926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.933 [2024-11-20 09:11:50.835932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:13.933 [2024-11-20 09:11:50.835939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.933 [2024-11-20 09:11:50.835948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:13.933 [2024-11-20 09:11:50.835951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.835963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:13.934 [2024-11-20 09:11:50.835964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.835978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.835980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:13.934 [2024-11-20 09:11:50.835990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.835994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:13.934 [2024-11-20 09:11:50.836002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:13.934 [2024-11-20 09:11:50.836015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:13.934 [2024-11-20 09:11:50.836027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:13.934 [2024-11-20 09:11:50.836056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:13.934 [2024-11-20 09:11:50.836084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:13.934 [2024-11-20 09:11:50.836096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:13.934 [2024-11-20 09:11:50.836108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:13.934 [2024-11-20 09:11:50.836134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:13.934 [2024-11-20 09:11:50.836146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:13.934 [2024-11-20 09:11:50.836159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:13.934 [2024-11-20 09:11:50.836171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:13.934 [2024-11-20 09:11:50.836189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:13.934 [2024-11-20 09:11:50.836202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:13.934 [2024-11-20 09:11:50.836215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:13.934 [2024-11-20 09:11:50.836227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:13.934 [2024-11-20 09:11:50.836239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:13.934 [2024-11-20 09:11:50.836252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:13.934 [2024-11-20 09:11:50.836278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:13.934 [2024-11-20 09:11:50.836315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:13.934 [2024-11-20 09:11:50.836343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:13.934 [2024-11-20 09:11:50.836356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:13.934 [2024-11-20 09:11:50.836369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:13.934 [2024-11-20 09:11:50.836381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:13.934 [2024-11-20 09:11:50.836406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:13.934 [2024-11-20 09:11:50.836419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:13.934 [2024-11-20 09:11:50.836431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:13.934 [2024-11-20 09:11:50.836444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:13.934 [2024-11-20 09:11:50.836456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:13.934 [2024-11-20 09:11:50.836485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:13.934 [2024-11-20 09:11:50.836497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:13.934 [2024-11-20 09:11:50.836516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:13.934 [2024-11-20 09:11:50.836528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:13.934 [2024-11-20 09:11:50.836541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.934 [2024-11-20 09:11:50.836553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set
00:21:13.935 [2024-11-20 09:11:50.836565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x127fc90 is same with the state(6) to be set 00:21:13.935 [2024-11-20 09:11:50.836569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:13.935 [2024-11-20 09:11:50.836577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set 00:21:13.935 [2024-11-20 09:11:50.836595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set 00:21:13.935 [2024-11-20 09:11:50.836607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set 00:21:13.935 [2024-11-20 09:11:50.836619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set 00:21:13.935 [2024-11-20 09:11:50.836637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127fc90 is same with the state(6) to be set 00:21:13.935 [2024-11-20 09:11:50.837040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.935 [2024-11-20 09:11:50.837063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.935 [2024-11-20 09:11:50.837084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.935 [2024-11-20 09:11:50.837099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.935 [2024-11-20 09:11:50.837114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.935 [2024-11-20 
09:11:50.837128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.935 [2024-11-20 09:11:50.837142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.935 [2024-11-20 09:11:50.837156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.935 [2024-11-20 09:11:50.837171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.935 [2024-11-20 09:11:50.837184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.935 [2024-11-20 09:11:50.837199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.935 [2024-11-20 09:11:50.837213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.935 [2024-11-20 09:11:50.837232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.935 [2024-11-20 09:11:50.837263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.935 [2024-11-20 09:11:50.837278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.935 [2024-11-20 09:11:50.837291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.935 [2024-11-20 09:11:50.837328] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.935 [2024-11-20 09:11:50.837344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.935 [2024-11-20 09:11:50.837360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.935 [2024-11-20 09:11:50.837374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.935 [2024-11-20 09:11:50.837388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.935 [2024-11-20 09:11:50.837402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.935 [2024-11-20 09:11:50.837416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.935 [2024-11-20 09:11:50.837430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.935 [2024-11-20 09:11:50.837445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.935 [2024-11-20 09:11:50.837458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.935 [2024-11-20 09:11:50.837472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.935 [2024-11-20 09:11:50.837491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.935 [2024-11-20 09:11:50.837508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.935 [2024-11-20 09:11:50.837526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.935 [2024-11-20 09:11:50.837542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.935 [2024-11-20 09:11:50.837555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.935 [2024-11-20 09:11:50.837570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.935 [2024-11-20 09:11:50.837584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.935 [2024-11-20 09:11:50.837599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.935 [2024-11-20 09:11:50.837612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.935 [2024-11-20 09:11:50.837627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.935 [2024-11-20 09:11:50.837645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.935 [2024-11-20 09:11:50.837661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.935 
[2024-11-20 09:11:50.837675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.935 
[2024-11-20 09:11:50.837690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.935 
[2024-11-20 09:11:50.837709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.935 
[2024-11-20 09:11:50.837724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.935 
[2024-11-20 09:11:50.837715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.935 
[2024-11-20 09:11:50.837738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.935 
[2024-11-20 09:11:50.837750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.935 
[2024-11-20 09:11:50.837754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.935 
[2024-11-20 09:11:50.837772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.935 
[2024-11-20 09:11:50.837773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.935 
[2024-11-20 09:11:50.837787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.935 
[2024-11-20 09:11:50.837791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.935 
[2024-11-20 09:11:50.837800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.935 
[2024-11-20 09:11:50.837805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.935 
[2024-11-20 09:11:50.837813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.935 
[2024-11-20 09:11:50.837820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.935 
[2024-11-20 09:11:50.837825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.935 
[2024-11-20 09:11:50.837835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.935 
[2024-11-20 09:11:50.837838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.935 
[2024-11-20 09:11:50.837850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.935 
[2024-11-20 09:11:50.837850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.935 
[2024-11-20 09:11:50.837864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.935 
[2024-11-20 09:11:50.837866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.935 
[2024-11-20 09:11:50.837876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.935 
[2024-11-20 09:11:50.837885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.935 
[2024-11-20 09:11:50.837888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.935 
[2024-11-20 09:11:50.837901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.935 
[2024-11-20 09:11:50.837902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.935 
[2024-11-20 09:11:50.837916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.935 
[2024-11-20 09:11:50.837919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.935 
[2024-11-20 09:11:50.837928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 
[2024-11-20 09:11:50.837933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.936 
[2024-11-20 09:11:50.837940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 
[2024-11-20 09:11:50.837949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.936 
[2024-11-20 09:11:50.837952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 
[2024-11-20 09:11:50.837962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.936 
[2024-11-20 09:11:50.837964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 
[2024-11-20 09:11:50.837977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 
[2024-11-20 09:11:50.837979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.936 
[2024-11-20 09:11:50.837988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 
[2024-11-20 09:11:50.837998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.936 
[2024-11-20 09:11:50.838000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 
[2024-11-20 09:11:50.838013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 
[2024-11-20 09:11:50.838015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.936 
[2024-11-20 09:11:50.838024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 
[2024-11-20 09:11:50.838034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.936 
[2024-11-20 09:11:50.838036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 
[2024-11-20 09:11:50.838049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 
[2024-11-20 09:11:50.838051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.936 
[2024-11-20 09:11:50.838061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 
[2024-11-20 09:11:50.838065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.936 
[2024-11-20 09:11:50.838096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 
[2024-11-20 09:11:50.838099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.936 
[2024-11-20 09:11:50.838109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 
[2024-11-20 09:11:50.838114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.936 
[2024-11-20 09:11:50.838122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 
[2024-11-20 09:11:50.838129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.936 
[2024-11-20 09:11:50.838134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 
[2024-11-20 09:11:50.838142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.936 
[2024-11-20 09:11:50.838146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 
[2024-11-20 09:11:50.838157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.936 
[2024-11-20 09:11:50.838159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 
[2024-11-20 09:11:50.838173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.936 
[2024-11-20 09:11:50.838180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 
[2024-11-20 09:11:50.838189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.936 
[2024-11-20 09:11:50.838193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 
[2024-11-20 09:11:50.838203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.936 
[2024-11-20 09:11:50.838205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 
[2024-11-20 09:11:50.838218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 
[2024-11-20 09:11:50.838220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.936 
[2024-11-20 09:11:50.838230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 
[2024-11-20 09:11:50.838234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.936 
[2024-11-20 09:11:50.838256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 
[2024-11-20 09:11:50.838265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.936 
[2024-11-20 09:11:50.838269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 
[2024-11-20 09:11:50.838278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.936 
[2024-11-20 09:11:50.838280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 
[2024-11-20 09:11:50.838330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 
[2024-11-20 09:11:50.838331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.936 
[2024-11-20 09:11:50.838344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 
[2024-11-20 09:11:50.838357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 
[2024-11-20 09:11:50.838369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 [2024-11-20 09:11:50.838381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 [2024-11-20 09:11:50.838393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 [2024-11-20 09:11:50.838405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 [2024-11-20 09:11:50.838417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 [2024-11-20 09:11:50.838433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 [2024-11-20 09:11:50.838454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 [2024-11-20 09:11:50.838474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 [2024-11-20 09:11:50.838493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 [2024-11-20 09:11:50.838505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 [2024-11-20 09:11:50.838517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 [2024-11-20 09:11:50.838529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 [2024-11-20 09:11:50.838540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with 
the state(6) to be set 00:21:13.936 [2024-11-20 09:11:50.838552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 [2024-11-20 09:11:50.838564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 [2024-11-20 09:11:50.838575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 [2024-11-20 09:11:50.838587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 [2024-11-20 09:11:50.838608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 [2024-11-20 09:11:50.838634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 [2024-11-20 09:11:50.838645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280160 is same with the state(6) to be set 00:21:13.936 [2024-11-20 09:11:50.839396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.839430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.839461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.839483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.839505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 
00:21:13.937 [2024-11-20 09:11:50.839526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.839548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.839569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.839594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.839631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.839652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.839672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.839693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.839712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.839732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.839752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.839786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 
09:11:50.839808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.839828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.839847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.839869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.839889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.839910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.839932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.839951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.839972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.839993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840061] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840361] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840637] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.840848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1280630 is same with the state(6) to be set 00:21:13.937 [2024-11-20 09:11:50.853827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.937 [2024-11-20 09:11:50.853927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.937 [2024-11-20 09:11:50.853945] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.937 [2024-11-20 09:11:50.853961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.937 [2024-11-20 09:11:50.853975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.937 [2024-11-20 09:11:50.853991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.937 [2024-11-20 09:11:50.854005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.937 [2024-11-20 09:11:50.854020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.937 [2024-11-20 09:11:50.854034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.937 [2024-11-20 09:11:50.854049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.937 [2024-11-20 09:11:50.854063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.937 [2024-11-20 09:11:50.854079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.937 [2024-11-20 09:11:50.854092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.937 [2024-11-20 09:11:50.854107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.937 [2024-11-20 09:11:50.854123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.937 [2024-11-20 09:11:50.854139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.938 [2024-11-20 09:11:50.854162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.938 [2024-11-20 09:11:50.854178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.938 [2024-11-20 09:11:50.854207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.938 [2024-11-20 09:11:50.854225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.938 [2024-11-20 09:11:50.854239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.938 [2024-11-20 09:11:50.854255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.938 [2024-11-20 09:11:50.854269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.938 [2024-11-20 09:11:50.854285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.938 [2024-11-20 09:11:50.854298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:13.938 [2024-11-20 09:11:50.854324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.938 [2024-11-20 09:11:50.854339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.938 [2024-11-20 09:11:50.854354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.938 [2024-11-20 09:11:50.854368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.938 [2024-11-20 09:11:50.854384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.938 [2024-11-20 09:11:50.854397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.938 [2024-11-20 09:11:50.854412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.938 [2024-11-20 09:11:50.854426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.938 [2024-11-20 09:11:50.854442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.938 [2024-11-20 09:11:50.854455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.938 [2024-11-20 09:11:50.854471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.938 [2024-11-20 
09:11:50.854484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.938 [2024-11-20 09:11:50.854500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.938 [2024-11-20 09:11:50.854514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.938 [2024-11-20 09:11:50.854529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.938 [2024-11-20 09:11:50.854543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.938 [2024-11-20 09:11:50.854558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.938 [2024-11-20 09:11:50.854572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.938 [2024-11-20 09:11:50.854603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.938 [2024-11-20 09:11:50.854617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.938 [2024-11-20 09:11:50.854632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.938 [2024-11-20 09:11:50.854647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.938 [2024-11-20 09:11:50.854662] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.938 [2024-11-20 09:11:50.854676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.938 [2024-11-20 09:11:50.854691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.938 [2024-11-20 09:11:50.854705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.938 [2024-11-20 09:11:50.854784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:13.938 [2024-11-20 09:11:50.855545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.938 [2024-11-20 09:11:50.855575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.938 [2024-11-20 09:11:50.855592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.938 [2024-11-20 09:11:50.855605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.938 [2024-11-20 09:11:50.855620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.938 [2024-11-20 09:11:50.855633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.938 [2024-11-20 09:11:50.855646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.938 [2024-11-20 09:11:50.855659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.938 [2024-11-20 09:11:50.855672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4aab0 is same with the state(6) to be set 00:21:13.938 [2024-11-20 09:11:50.855714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6e620 (9): Bad file descriptor 00:21:13.938 [2024-11-20 09:11:50.855777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.938 [2024-11-20 09:11:50.855805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.938 [2024-11-20 09:11:50.855819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.938 [2024-11-20 09:11:50.855832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.938 [2024-11-20 09:11:50.855846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.938 [2024-11-20 09:11:50.855858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.938 [2024-11-20 09:11:50.855872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.938 [2024-11-20 09:11:50.855890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.938 [2024-11-20 
09:11:50.855903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf26dd0 is same with the state(6) to be set 00:21:13.938 [2024-11-20 09:11:50.855932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb066f0 (9): Bad file descriptor 00:21:13.938 [2024-11-20 09:11:50.855984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.938 [2024-11-20 09:11:50.856005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.938 [2024-11-20 09:11:50.856020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.938 [2024-11-20 09:11:50.856032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.938 [2024-11-20 09:11:50.856046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.938 [2024-11-20 09:11:50.856058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.938 [2024-11-20 09:11:50.856072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.938 [2024-11-20 09:11:50.856085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.938 [2024-11-20 09:11:50.856097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30150 is same with the state(6) to be set 00:21:13.938 [2024-11-20 09:11:50.856148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 
nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.938 [2024-11-20 09:11:50.856168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.938 [2024-11-20 09:11:50.856183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.938 [2024-11-20 09:11:50.856196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.938 [2024-11-20 09:11:50.856210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.938 [2024-11-20 09:11:50.856222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.938 [2024-11-20 09:11:50.856236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.938 [2024-11-20 09:11:50.856249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.938 [2024-11-20 09:11:50.856261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf43020 is same with the state(6) to be set 00:21:13.938 [2024-11-20 09:11:50.856300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xafa6e0 (9): Bad file descriptor 00:21:13.938 [2024-11-20 09:11:50.856362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.938 [2024-11-20 09:11:50.856383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.939 [2024-11-20 09:11:50.856397] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.939 [2024-11-20 09:11:50.856420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.939 [2024-11-20 09:11:50.856435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.939 [2024-11-20 09:11:50.856448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.939 [2024-11-20 09:11:50.856462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.939 [2024-11-20 09:11:50.856474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.939 [2024-11-20 09:11:50.856486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb03be0 is same with the state(6) to be set 00:21:13.939 [2024-11-20 09:11:50.856509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb02890 (9): Bad file descriptor 00:21:13.939 [2024-11-20 09:11:50.856553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.939 [2024-11-20 09:11:50.856573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.939 [2024-11-20 09:11:50.856595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.939 [2024-11-20 09:11:50.856608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:13.939 [2024-11-20 09:11:50.856621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.939 [2024-11-20 09:11:50.856634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.939 [2024-11-20 09:11:50.856648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.939 [2024-11-20 09:11:50.856660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.939 [2024-11-20 09:11:50.856672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6e110 is same with the state(6) to be set 00:21:13.939 [2024-11-20 09:11:50.860662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:13.939 [2024-11-20 09:11:50.860713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:13.939 [2024-11-20 09:11:50.860734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:13.939 [2024-11-20 09:11:50.860758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30150 (9): Bad file descriptor 00:21:13.939 [2024-11-20 09:11:50.860791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb03be0 (9): Bad file descriptor 00:21:13.939 [2024-11-20 09:11:50.861489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:13.939 [2024-11-20 09:11:50.861520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb02890 with addr=10.0.0.2, port=4420 00:21:13.939 [2024-11-20 09:11:50.861538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xb02890 is same with the state(6) to be set 00:21:13.939 [2024-11-20 09:11:50.862857] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:13.939 [2024-11-20 09:11:50.862974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:13.939 [2024-11-20 09:11:50.863002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb03be0 with addr=10.0.0.2, port=4420 00:21:13.939 [2024-11-20 09:11:50.863027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb03be0 is same with the state(6) to be set 00:21:13.939 [2024-11-20 09:11:50.863130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:13.939 [2024-11-20 09:11:50.863156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30150 with addr=10.0.0.2, port=4420 00:21:13.939 [2024-11-20 09:11:50.863173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30150 is same with the state(6) to be set 00:21:13.939 [2024-11-20 09:11:50.863192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb02890 (9): Bad file descriptor 00:21:13.939 [2024-11-20 09:11:50.863258] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:13.939 [2024-11-20 09:11:50.863341] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:13.939 [2024-11-20 09:11:50.863442] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:13.939 [2024-11-20 09:11:50.863520] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:13.939 [2024-11-20 09:11:50.863592] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:13.939 [2024-11-20 09:11:50.863669] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:13.939 [2024-11-20 09:11:50.863723] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb03be0 (9): Bad file descriptor 00:21:13.939 [2024-11-20 09:11:50.863755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30150 (9): Bad file descriptor 00:21:13.939 [2024-11-20 09:11:50.863778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:13.939 [2024-11-20 09:11:50.863791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:13.939 [2024-11-20 09:11:50.863807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:13.939 [2024-11-20 09:11:50.863824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:13.939 [2024-11-20 09:11:50.863969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:13.939 [2024-11-20 09:11:50.863989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:13.939 [2024-11-20 09:11:50.864003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:13.939 [2024-11-20 09:11:50.864015] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:21:13.939 [2024-11-20 09:11:50.864030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:13.939 [2024-11-20 09:11:50.864044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:13.939 [2024-11-20 09:11:50.864056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 
00:21:13.939 [2024-11-20 09:11:50.864068] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:21:13.939 [2024-11-20 09:11:50.865461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4aab0 (9): Bad file descriptor 00:21:13.939 [2024-11-20 09:11:50.865519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf26dd0 (9): Bad file descriptor 00:21:13.939 [2024-11-20 09:11:50.865562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf43020 (9): Bad file descriptor 00:21:13.939 [2024-11-20 09:11:50.865604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6e110 (9): Bad file descriptor 00:21:13.939 [2024-11-20 09:11:50.865758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.939 [2024-11-20 09:11:50.865783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.939 [2024-11-20 09:11:50.865819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.939 [2024-11-20 09:11:50.865835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.939 [2024-11-20 09:11:50.865851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.939 [2024-11-20 09:11:50.865865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.939 [2024-11-20 09:11:50.865881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.939 [2024-11-20 09:11:50.865895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.939 [2024-11-20 09:11:50.865910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.939 [2024-11-20 09:11:50.865923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.939 [2024-11-20 09:11:50.865939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.939 [2024-11-20 09:11:50.865952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.939 [2024-11-20 09:11:50.865968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.939 [2024-11-20 09:11:50.865981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.939 [2024-11-20 09:11:50.865996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.939 [2024-11-20 09:11:50.866010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.939 [2024-11-20 09:11:50.866027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.939 [2024-11-20 09:11:50.866041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.939 [2024-11-20 
09:11:50.866055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.939 [2024-11-20 09:11:50.866069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.939 [2024-11-20 09:11:50.866085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.939 [2024-11-20 09:11:50.866098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.939 [2024-11-20 09:11:50.866114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.939 [2024-11-20 09:11:50.866127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.939 [2024-11-20 09:11:50.866143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.940 [2024-11-20 09:11:50.866157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.940 [2024-11-20 09:11:50.866172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.940 [2024-11-20 09:11:50.866190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.940 [2024-11-20 09:11:50.866206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.940 [2024-11-20 09:11:50.866219] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.940 [2024-11-20 09:11:50.866235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.940 [2024-11-20 09:11:50.866249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.940 [2024-11-20 09:11:50.866265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.940 [2024-11-20 09:11:50.866278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.940 [2024-11-20 09:11:50.866293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.940 [2024-11-20 09:11:50.866315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.940 [2024-11-20 09:11:50.866332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.940 [2024-11-20 09:11:50.866346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.940 [2024-11-20 09:11:50.866361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.940 [2024-11-20 09:11:50.866375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.940 [2024-11-20 09:11:50.866390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 
nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.940 [2024-11-20 09:11:50.866403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs: READ sqid:1 cid:21-63 nsid:1 lba:19072-24448 len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:21:13.941 [2024-11-20 09:11:50.867679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff36b0 is same with the state(6) to be set
00:21:13.941 [2024-11-20 09:11:50.868979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:13.941 [2024-11-20 09:11:50.869003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs: READ sqid:1 cid:1-63 nsid:1 lba:16512-24448 len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:21:13.943 [2024-11-20 09:11:50.870914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff4970 is same with the state(6) to be set
00:21:13.943 [2024-11-20 09:11:50.872217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:13.943 [2024-11-20 09:11:50.872241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs: READ sqid:1 cid:1-9 nsid:1 lba:16512-17536 len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:21:13.943 [2024-11-20 09:11:50.872544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.943 [2024-11-20 09:11:50.872558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.943 [2024-11-20 09:11:50.872573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.943 [2024-11-20 09:11:50.872587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.943 [2024-11-20 09:11:50.872602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.943 [2024-11-20 09:11:50.872615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.943 [2024-11-20 09:11:50.872631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.943 [2024-11-20 09:11:50.872644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.943 [2024-11-20 09:11:50.872660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.943 [2024-11-20 09:11:50.872673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.943 [2024-11-20 09:11:50.872688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.943 [2024-11-20 09:11:50.872701] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.943 [2024-11-20 09:11:50.872717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.943 [2024-11-20 09:11:50.872730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.943 [2024-11-20 09:11:50.872745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.943 [2024-11-20 09:11:50.872758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.943 [2024-11-20 09:11:50.872773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.943 [2024-11-20 09:11:50.872787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.943 [2024-11-20 09:11:50.872803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.943 [2024-11-20 09:11:50.872816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.943 [2024-11-20 09:11:50.872832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.943 [2024-11-20 09:11:50.872845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.943 [2024-11-20 09:11:50.872860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.943 [2024-11-20 09:11:50.872874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.943 [2024-11-20 09:11:50.872889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.943 [2024-11-20 09:11:50.872906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.943 [2024-11-20 09:11:50.872923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.943 [2024-11-20 09:11:50.872936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.943 [2024-11-20 09:11:50.872952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.943 [2024-11-20 09:11:50.872966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.943 [2024-11-20 09:11:50.872982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.943 [2024-11-20 09:11:50.872995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.943 [2024-11-20 09:11:50.873010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.943 [2024-11-20 09:11:50.873024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:13.943 [2024-11-20 09:11:50.873039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.943 [2024-11-20 09:11:50.873053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.943 [2024-11-20 09:11:50.873068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.943 [2024-11-20 09:11:50.873081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.943 [2024-11-20 09:11:50.873097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.943 [2024-11-20 09:11:50.873110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.943 [2024-11-20 09:11:50.873126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.943 [2024-11-20 09:11:50.873139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.943 [2024-11-20 09:11:50.873155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.944 [2024-11-20 09:11:50.873169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.944 [2024-11-20 09:11:50.873184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.944 [2024-11-20 
09:11:50.873198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.944 [2024-11-20 09:11:50.873212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.944 [2024-11-20 09:11:50.873226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.944 [2024-11-20 09:11:50.873241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.944 [2024-11-20 09:11:50.873255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.944 [2024-11-20 09:11:50.873277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.944 [2024-11-20 09:11:50.873291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.944 [2024-11-20 09:11:50.873312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.944 [2024-11-20 09:11:50.873327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.944 [2024-11-20 09:11:50.873343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.944 [2024-11-20 09:11:50.873357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.944 [2024-11-20 09:11:50.873372] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.944 [2024-11-20 09:11:50.873385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.944 [2024-11-20 09:11:50.873401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.944 [2024-11-20 09:11:50.873415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.944 [2024-11-20 09:11:50.873430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.944 [2024-11-20 09:11:50.873443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.944 [2024-11-20 09:11:50.873459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.944 [2024-11-20 09:11:50.873472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.944 [2024-11-20 09:11:50.873488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.944 [2024-11-20 09:11:50.873501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.944 [2024-11-20 09:11:50.873517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.944 [2024-11-20 09:11:50.873530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.944 [2024-11-20 09:11:50.873545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.944 [2024-11-20 09:11:50.873558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.944 [2024-11-20 09:11:50.873574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.944 [2024-11-20 09:11:50.873587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.944 [2024-11-20 09:11:50.873602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.944 [2024-11-20 09:11:50.873615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.944 [2024-11-20 09:11:50.873631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.944 [2024-11-20 09:11:50.873649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.944 [2024-11-20 09:11:50.873665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.944 [2024-11-20 09:11:50.873679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.944 [2024-11-20 09:11:50.873694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.944 
[2024-11-20 09:11:50.873708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.944 [2024-11-20 09:11:50.873723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.944 [2024-11-20 09:11:50.873737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.944 [2024-11-20 09:11:50.873752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.944 [2024-11-20 09:11:50.873765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.944 [2024-11-20 09:11:50.873781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.944 [2024-11-20 09:11:50.873794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.944 [2024-11-20 09:11:50.873809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.944 [2024-11-20 09:11:50.873823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.944 [2024-11-20 09:11:50.873838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.944 [2024-11-20 09:11:50.873851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.944 [2024-11-20 09:11:50.873866] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.944 [2024-11-20 09:11:50.873881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.944 [2024-11-20 09:11:50.873896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.944 [2024-11-20 09:11:50.873909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.944 [2024-11-20 09:11:50.873924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.944 [2024-11-20 09:11:50.873938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.944 [2024-11-20 09:11:50.873961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.944 [2024-11-20 09:11:50.873976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.944 [2024-11-20 09:11:50.873991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.944 [2024-11-20 09:11:50.874004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.944 [2024-11-20 09:11:50.874023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.944 [2024-11-20 09:11:50.874037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.944 [2024-11-20 09:11:50.874053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.944 [2024-11-20 09:11:50.874067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.944 [2024-11-20 09:11:50.874082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.944 [2024-11-20 09:11:50.874095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.944 [2024-11-20 09:11:50.874111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.944 [2024-11-20 09:11:50.874124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.944 [2024-11-20 09:11:50.883319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd467c0 is same with the state(6) to be set 00:21:13.944 [2024-11-20 09:11:50.884668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:13.944 [2024-11-20 09:11:50.884704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:13.944 [2024-11-20 09:11:50.884723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:21:13.944 [2024-11-20 09:11:50.885199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:13.944 [2024-11-20 09:11:50.885233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb066f0 with addr=10.0.0.2, 
port=4420 00:21:13.944 [2024-11-20 09:11:50.885251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb066f0 is same with the state(6) to be set 00:21:13.944 [2024-11-20 09:11:50.885351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:13.944 [2024-11-20 09:11:50.885387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xafa6e0 with addr=10.0.0.2, port=4420 00:21:13.944 [2024-11-20 09:11:50.885403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafa6e0 is same with the state(6) to be set 00:21:13.944 [2024-11-20 09:11:50.885487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:13.944 [2024-11-20 09:11:50.885511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf6e620 with addr=10.0.0.2, port=4420 00:21:13.945 [2024-11-20 09:11:50.885526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6e620 is same with the state(6) to be set 00:21:13.945 [2024-11-20 09:11:50.886142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.945 [2024-11-20 09:11:50.886167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.945 [2024-11-20 09:11:50.886188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.945 [2024-11-20 09:11:50.886204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.945 [2024-11-20 09:11:50.886219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.945 [2024-11-20 
09:11:50.886234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.945 [2024-11-20 09:11:50.886263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.945 [2024-11-20 09:11:50.886279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.945 [2024-11-20 09:11:50.886294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.945 [2024-11-20 09:11:50.886319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.945 [2024-11-20 09:11:50.886336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.945 [2024-11-20 09:11:50.886350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.945 [2024-11-20 09:11:50.886365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.945 [2024-11-20 09:11:50.886379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.945 [2024-11-20 09:11:50.886394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.945 [2024-11-20 09:11:50.886408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.945 [2024-11-20 09:11:50.886424] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.945 [2024-11-20 09:11:50.886438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.945 [2024-11-20 09:11:50.886455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.945 [2024-11-20 09:11:50.886469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.945 [2024-11-20 09:11:50.886484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.945 [2024-11-20 09:11:50.886498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.945 [2024-11-20 09:11:50.886513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.945 [2024-11-20 09:11:50.886528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.945 [2024-11-20 09:11:50.886544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.945 [2024-11-20 09:11:50.886558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.945 [2024-11-20 09:11:50.886574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.945 [2024-11-20 09:11:50.886587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.945 [2024-11-20 09:11:50.886603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.945 [2024-11-20 09:11:50.886616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.945 [2024-11-20 09:11:50.886631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.945 [2024-11-20 09:11:50.886650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.945 [2024-11-20 09:11:50.886666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.945 [2024-11-20 09:11:50.886680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.945 [2024-11-20 09:11:50.886696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.945 [2024-11-20 09:11:50.886710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.945 [2024-11-20 09:11:50.886725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.945 [2024-11-20 09:11:50.886739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.945 [2024-11-20 09:11:50.886755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.945 
[2024-11-20 09:11:50.886768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.945 [2024-11-20 09:11:50.886783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.945 [2024-11-20 09:11:50.886797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.945 [2024-11-20 09:11:50.886812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.945 [2024-11-20 09:11:50.886826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.945 [2024-11-20 09:11:50.886841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.945 [2024-11-20 09:11:50.886855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.945 [2024-11-20 09:11:50.886870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.945 [2024-11-20 09:11:50.886884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.945 [2024-11-20 09:11:50.886899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.945 [2024-11-20 09:11:50.886912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.945 [2024-11-20 09:11:50.886927] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.945 [2024-11-20 09:11:50.886940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.945 [2024-11-20 09:11:50.886956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.945 [2024-11-20 09:11:50.886970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.945 [2024-11-20 09:11:50.886986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.945 [2024-11-20 09:11:50.887000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.945 [2024-11-20 09:11:50.887019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.945 [2024-11-20 09:11:50.887033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.945 [2024-11-20 09:11:50.887049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.945 [2024-11-20 09:11:50.887062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.945 [2024-11-20 09:11:50.887077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.945 [2024-11-20 09:11:50.887091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.945 [2024-11-20 09:11:50.887106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.945 [2024-11-20 09:11:50.887120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.945 [2024-11-20 09:11:50.887138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.945 [2024-11-20 09:11:50.887151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.945 [2024-11-20 09:11:50.887166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.887180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 09:11:50.887201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.887215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 09:11:50.887230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.887244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 09:11:50.887259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.887273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 09:11:50.887289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.887311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 09:11:50.887329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.887342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 09:11:50.887358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.887372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 09:11:50.887387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.887405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 09:11:50.887421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.887435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 09:11:50.887450] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.887464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 09:11:50.887479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.887493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 09:11:50.887513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.887527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 09:11:50.887543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.887556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 09:11:50.887572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.887585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 09:11:50.887601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.887615] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 09:11:50.887630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.887643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 09:11:50.887659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.887672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 09:11:50.887688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.887701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 09:11:50.887716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.887730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 09:11:50.887745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.887759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 09:11:50.887775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.887792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 09:11:50.887808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.887821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 09:11:50.887837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.887851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 09:11:50.887866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.887880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 09:11:50.887895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.887909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 09:11:50.887924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.887937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 
09:11:50.887953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.887967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 09:11:50.887982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.887995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 09:11:50.888011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.888025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 09:11:50.888040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.888054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 09:11:50.888069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.888082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 09:11:50.888096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf08c80 is same with the state(6) to be set 00:21:13.946 [2024-11-20 09:11:50.889384] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.889407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 09:11:50.889427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.889447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 09:11:50.889465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.889480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 09:11:50.889495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.889509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 09:11:50.889524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.889537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.946 [2024-11-20 09:11:50.889553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.946 [2024-11-20 09:11:50.889567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.889582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.889595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.889610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.889625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.889640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.889655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.889671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.889684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.889700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.889713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.889730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 
09:11:50.889744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.889759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.889772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.889788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.889802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.889822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.889836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.889852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.889865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.889880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.889894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.889909] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.889922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.889938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.889953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.889968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.889982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.889997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.890010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.890025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.890040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.890056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.890069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.890084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.890097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.890113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.890126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.890141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.890154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.890170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.890188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.890205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.890218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.890233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 
[2024-11-20 09:11:50.890247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.890262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.890275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.890290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.890309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.890328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.890342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.890357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.890371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.890385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.890399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.890414] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.890428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.890443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.890456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.890471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.890485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.890500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.890514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.890529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.890542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.890564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.890579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.890594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.890607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.890623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.890637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.890651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.890665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.947 [2024-11-20 09:11:50.890680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.947 [2024-11-20 09:11:50.890693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.948 [2024-11-20 09:11:50.890709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.948 [2024-11-20 09:11:50.890722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.948 [2024-11-20 09:11:50.890738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0
00:21:13.948 [2024-11-20 09:11:50.890751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:13.948 [2024-11-20 09:11:50.890766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... the same READ / "ABORTED - SQ DELETION (00/08)" pair repeats for cid:47 through cid:63 (lba 22400 through 24448, step 128) ...]
00:21:13.948 [2024-11-20 09:11:50.891288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0a220 is same with the state(6) to be set
00:21:13.948 [2024-11-20 09:11:50.892547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... the same READ / "ABORTED - SQ DELETION (00/08)" pair repeats for cid:0 through cid:63 (lba 16384 through 24448, step 128) ...]
00:21:13.950 [2024-11-20 09:11:50.894483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0b7c0 is same with the state(6) to be set
00:21:13.950 [2024-11-20 09:11:50.895755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... the same READ / "ABORTED - SQ DELETION (00/08)" pair repeats for cid:0 through cid:34 (lba 16384 through 20736, step 128); the log is truncated mid-entry ...]
[2024-11-20 09:11:50.896802] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.951 [2024-11-20 09:11:50.896820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.951 [2024-11-20 09:11:50.896836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.951 [2024-11-20 09:11:50.896850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.951 [2024-11-20 09:11:50.896865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.951 [2024-11-20 09:11:50.896878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.951 [2024-11-20 09:11:50.896893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.951 [2024-11-20 09:11:50.896906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.951 [2024-11-20 09:11:50.896921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.951 [2024-11-20 09:11:50.896935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.951 [2024-11-20 09:11:50.896950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.951 [2024-11-20 09:11:50.896963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.951 [2024-11-20 09:11:50.896985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.951 [2024-11-20 09:11:50.897000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.951 [2024-11-20 09:11:50.897015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.951 [2024-11-20 09:11:50.897029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.951 [2024-11-20 09:11:50.897044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.951 [2024-11-20 09:11:50.897057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.951 [2024-11-20 09:11:50.897073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.951 [2024-11-20 09:11:50.897087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.951 [2024-11-20 09:11:50.897102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.951 [2024-11-20 09:11:50.897115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.951 [2024-11-20 09:11:50.897131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:13.951 [2024-11-20 09:11:50.897144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.951 [2024-11-20 09:11:50.897159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.951 [2024-11-20 09:11:50.897173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.951 [2024-11-20 09:11:50.897191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.951 [2024-11-20 09:11:50.897206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.951 [2024-11-20 09:11:50.897221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.951 [2024-11-20 09:11:50.897235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.951 [2024-11-20 09:11:50.897250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.951 [2024-11-20 09:11:50.897263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.951 [2024-11-20 09:11:50.897278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.951 [2024-11-20 09:11:50.897292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.951 [2024-11-20 09:11:50.897316] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.951 [2024-11-20 09:11:50.897333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.951 [2024-11-20 09:11:50.897348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.951 [2024-11-20 09:11:50.897362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.951 [2024-11-20 09:11:50.897378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.951 [2024-11-20 09:11:50.897391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.951 [2024-11-20 09:11:50.897406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.951 [2024-11-20 09:11:50.897420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.951 [2024-11-20 09:11:50.897436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.951 [2024-11-20 09:11:50.897449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.951 [2024-11-20 09:11:50.897469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.951 [2024-11-20 09:11:50.897483] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.951 [2024-11-20 09:11:50.897499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.951 [2024-11-20 09:11:50.897513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.951 [2024-11-20 09:11:50.897529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.951 [2024-11-20 09:11:50.897542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.951 [2024-11-20 09:11:50.897558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.951 [2024-11-20 09:11:50.897575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.951 [2024-11-20 09:11:50.897591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.951 [2024-11-20 09:11:50.897605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.951 [2024-11-20 09:11:50.897620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.952 [2024-11-20 09:11:50.897633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.952 [2024-11-20 09:11:50.897649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.952 [2024-11-20 09:11:50.897662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.952 [2024-11-20 09:11:50.897676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0cc30 is same with the state(6) to be set 00:21:13.952 [2024-11-20 09:11:50.899582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:13.952 [2024-11-20 09:11:50.899619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:13.952 [2024-11-20 09:11:50.899640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:13.952 [2024-11-20 09:11:50.899659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:21:13.952 [2024-11-20 09:11:50.899676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:21:13.952 [2024-11-20 09:11:50.899756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb066f0 (9): Bad file descriptor 00:21:13.952 [2024-11-20 09:11:50.899782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xafa6e0 (9): Bad file descriptor 00:21:13.952 [2024-11-20 09:11:50.899802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6e620 (9): Bad file descriptor 00:21:13.952 [2024-11-20 09:11:50.899850] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:21:13.952 [2024-11-20 09:11:50.899874] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
00:21:13.952 [2024-11-20 09:11:50.899896] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:21:13.952 [2024-11-20 09:11:50.899916] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:21:13.952 [2024-11-20 09:11:50.899935] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:21:13.952 [2024-11-20 09:11:50.900042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:21:13.952 task offset: 25856 on job bdev=Nvme3n1 fails
00:21:13.952
00:21:13.952 Latency(us)
[2024-11-20T08:11:50.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:13.952 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:13.952 Job: Nvme1n1 ended in about 0.92 seconds with error
00:21:13.952 Verification LBA range: start 0x0 length 0x400
00:21:13.952 Nvme1n1  : 0.92  138.94   8.68  69.47  0.00  303729.78  20486.07  265639.25
00:21:13.952 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:13.952 Job: Nvme2n1 ended in about 0.92 seconds with error
00:21:13.952 Verification LBA range: start 0x0 length 0x400
00:21:13.952 Nvme2n1  : 0.92  138.47   8.65  69.23  0.00  298741.76  20097.71  260978.92
00:21:13.952 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:13.952 Job: Nvme3n1 ended in about 0.91 seconds with error
00:21:13.952 Verification LBA range: start 0x0 length 0x400
00:21:13.952 Nvme3n1  : 0.91  215.29  13.46  70.30  0.00  212562.38  17087.91  257872.02
00:21:13.952 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:13.952 Job: Nvme4n1 ended in about 0.91 seconds with error
00:21:13.952 Verification LBA range: start 0x0 length 0x400
00:21:13.952 Nvme4n1  : 0.91  210.63  13.16  70.21  0.00  211460.74  21262.79  268746.15
00:21:13.952 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:13.952 Job: Nvme5n1 ended in about 0.91 seconds with error
00:21:13.952 Verification LBA range: start 0x0 length 0x400
00:21:13.952 Nvme5n1  : 0.91  210.38  13.15  70.13  0.00  207007.29  25049.32  246997.90
00:21:13.952 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:13.952 Job: Nvme6n1 ended in about 0.94 seconds with error
00:21:13.952 Verification LBA range: start 0x0 length 0x400
00:21:13.952 Nvme6n1  : 0.94  135.94   8.50  67.97  0.00  279570.96  19126.80  274959.93
00:21:13.952 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:13.952 Job: Nvme7n1 ended in about 0.94 seconds with error
00:21:13.952 Verification LBA range: start 0x0 length 0x400
00:21:13.952 Nvme7n1  : 0.94  135.48   8.47  67.74  0.00  274509.87  20291.89  250104.79
00:21:13.952 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:13.952 Job: Nvme8n1 ended in about 0.95 seconds with error
00:21:13.952 Verification LBA range: start 0x0 length 0x400
00:21:13.952 Nvme8n1  : 0.95  135.03   8.44  67.52  0.00  268883.44  18932.62  296708.17
00:21:13.952 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:13.952 Job: Nvme9n1 ended in about 0.95 seconds with error
00:21:13.952 Verification LBA range: start 0x0 length 0x400
00:21:13.952 Nvme9n1  : 0.95  134.58   8.41  67.29  0.00  263836.00  26796.94  276513.37
00:21:13.952 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:13.952 Job: Nvme10n1 ended in about 0.94 seconds with error
00:21:13.952 Verification LBA range: start 0x0 length 0x400
00:21:13.952 Nvme10n1 : 0.94  136.62   8.54  68.31  0.00  253092.85  21456.97  292047.83
[2024-11-20T08:11:50.981Z] ===================================================================================================================
00:21:13.952 [2024-11-20T08:11:50.981Z] Total    : 1591.36  99.46  688.17  0.00  252990.57  17087.91  296708.17
00:21:13.952 [2024-11-20 09:11:50.926998] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:13.952 [2024-11-20 09:11:50.927089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:21:13.952 [2024-11-20 09:11:50.927369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:13.952 [2024-11-20 09:11:50.927407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb02890 with addr=10.0.0.2, port=4420
00:21:13.952 [2024-11-20 09:11:50.927426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb02890 is same with the state(6) to be set
00:21:13.952 [2024-11-20 09:11:50.927528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:13.952 [2024-11-20 09:11:50.927553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf30150 with addr=10.0.0.2, port=4420
00:21:13.952 [2024-11-20 09:11:50.927568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf30150 is same with the state(6) to be set
00:21:13.952 [2024-11-20 09:11:50.927656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:13.952 [2024-11-20 09:11:50.927680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb03be0 with addr=10.0.0.2, port=4420
00:21:13.952 [2024-11-20 09:11:50.927695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb03be0 is same with the state(6) to be set
00:21:13.952 [2024-11-20 09:11:50.927797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:13.952 [2024-11-20 09:11:50.927821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6e110 with addr=10.0.0.2, port=4420 00:21:13.952
[2024-11-20 09:11:50.927836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6e110 is same with the state(6) to be set
00:21:13.952 [2024-11-20 09:11:50.927938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:13.952 [2024-11-20 09:11:50.927961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf43020 with addr=10.0.0.2, port=4420
00:21:13.952 [2024-11-20 09:11:50.927976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf43020 is same with the state(6) to be set
00:21:13.952 [2024-11-20 09:11:50.927992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:21:13.952 [2024-11-20 09:11:50.928005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:21:13.952 [2024-11-20 09:11:50.928023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:21:13.952 [2024-11-20 09:11:50.928042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:21:13.952 [2024-11-20 09:11:50.928059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:21:13.952 [2024-11-20 09:11:50.928071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:21:13.952 [2024-11-20 09:11:50.928084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:21:13.952 [2024-11-20 09:11:50.928097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:21:13.952 [2024-11-20 09:11:50.928111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:21:13.952 [2024-11-20 09:11:50.928123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:21:13.952 [2024-11-20 09:11:50.928136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:21:13.952 [2024-11-20 09:11:50.928148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:21:13.952 [2024-11-20 09:11:50.929518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:13.952 [2024-11-20 09:11:50.929549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf26dd0 with addr=10.0.0.2, port=4420
00:21:13.952 [2024-11-20 09:11:50.929566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf26dd0 is same with the state(6) to be set
00:21:13.952 [2024-11-20 09:11:50.929653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:13.952 [2024-11-20 09:11:50.929677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf4aab0 with addr=10.0.0.2, port=4420
00:21:13.952 [2024-11-20 09:11:50.929692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4aab0 is same with the state(6) to be set
00:21:13.952 [2024-11-20 09:11:50.929717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb02890 (9): Bad file descriptor
00:21:13.952 [2024-11-20 09:11:50.929741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf30150 (9): Bad file descriptor
00:21:13.952 [2024-11-20 09:11:50.929759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb03be0 (9): Bad file descriptor
00:21:13.953 [2024-11-20 09:11:50.929776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6e110 (9): Bad file descriptor
00:21:13.953 [2024-11-20 09:11:50.929793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf43020 (9): Bad file descriptor
00:21:13.953 [2024-11-20 09:11:50.929880] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:21:13.953 [2024-11-20 09:11:50.929904] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:21:13.953 [2024-11-20 09:11:50.929923] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:21:13.953 [2024-11-20 09:11:50.929944] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:21:13.953 [2024-11-20 09:11:50.929962] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:21:13.953 [2024-11-20 09:11:50.930070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf26dd0 (9): Bad file descriptor
00:21:13.953 [2024-11-20 09:11:50.930096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4aab0 (9): Bad file descriptor
00:21:13.953 [2024-11-20 09:11:50.930113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:21:13.953 [2024-11-20 09:11:50.930126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:21:13.953 [2024-11-20 09:11:50.930139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:21:13.953 [2024-11-20 09:11:50.930151] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:21:13.953 [2024-11-20 09:11:50.930165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:21:13.953 [2024-11-20 09:11:50.930177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:21:13.953 [2024-11-20 09:11:50.930189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:21:13.953 [2024-11-20 09:11:50.930201] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:21:13.953 [2024-11-20 09:11:50.930214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:21:13.953 [2024-11-20 09:11:50.930226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:21:13.953 [2024-11-20 09:11:50.930238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:21:13.953 [2024-11-20 09:11:50.930250] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:21:13.953 [2024-11-20 09:11:50.930263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:21:13.953 [2024-11-20 09:11:50.930275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:21:13.953 [2024-11-20 09:11:50.930287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:21:13.953 [2024-11-20 09:11:50.930298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:21:13.953 [2024-11-20 09:11:50.930320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:21:13.953 [2024-11-20 09:11:50.930333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:21:13.953 [2024-11-20 09:11:50.930345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:21:13.953 [2024-11-20 09:11:50.930357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:21:13.953 [2024-11-20 09:11:50.930443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:21:13.953 [2024-11-20 09:11:50.930468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:21:13.953 [2024-11-20 09:11:50.930484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:21:13.953 [2024-11-20 09:11:50.930523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:21:13.953 [2024-11-20 09:11:50.930540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:21:13.953 [2024-11-20 09:11:50.930553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:21:13.953 [2024-11-20 09:11:50.930566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:21:13.953 [2024-11-20 09:11:50.930579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:21:13.953 [2024-11-20 09:11:50.930591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:21:13.953 [2024-11-20 09:11:50.930604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:21:13.953 [2024-11-20 09:11:50.930615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:21:13.953 [2024-11-20 09:11:50.930722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:13.953 [2024-11-20 09:11:50.930749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf6e620 with addr=10.0.0.2, port=4420 00:21:13.953 [2024-11-20 09:11:50.930764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6e620 is same with the state(6) to be set 00:21:13.953 [2024-11-20 09:11:50.930840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:13.953 [2024-11-20 09:11:50.930864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xafa6e0 with addr=10.0.0.2, port=4420 00:21:13.953 [2024-11-20 09:11:50.930879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafa6e0 is same with the state(6) to be set 00:21:13.953 [2024-11-20 09:11:50.930969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:13.953 [2024-11-20 09:11:50.930994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb066f0 with addr=10.0.0.2, port=4420 00:21:13.953 [2024-11-20 09:11:50.931009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb066f0 is same with the state(6) to be set 00:21:13.953 [2024-11-20 09:11:50.931056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6e620 (9): Bad file descriptor 00:21:13.953 [2024-11-20 09:11:50.931080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xafa6e0 (9): Bad file descriptor 00:21:13.953 [2024-11-20 09:11:50.931099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb066f0 (9): Bad file descriptor 00:21:13.953 [2024-11-20 09:11:50.931136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:13.953 [2024-11-20 
09:11:50.931154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:13.953 [2024-11-20 09:11:50.931167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:13.953 [2024-11-20 09:11:50.931181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:21:13.953 [2024-11-20 09:11:50.931195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:13.953 [2024-11-20 09:11:50.931207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:13.953 [2024-11-20 09:11:50.931220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:13.953 [2024-11-20 09:11:50.931237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:21:13.953 [2024-11-20 09:11:50.931251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:13.953 [2024-11-20 09:11:50.931264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:13.953 [2024-11-20 09:11:50.931277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:13.953 [2024-11-20 09:11:50.931291] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:21:14.519 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3376054 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3376054 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 3376054 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:15.454 rmmod nvme_tcp 00:21:15.454 rmmod nvme_fabrics 00:21:15.454 rmmod nvme_keyring 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:21:15.454 09:11:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3375879 ']' 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3375879 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 3375879 ']' 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 3375879 00:21:15.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3375879) - No such process 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 3375879 is not found' 00:21:15.454 Process with pid 3375879 is not found 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:21:15.454 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:15.455 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:21:15.455 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.455 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:15.455 09:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.993 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:17.993 00:21:17.993 real 0m7.268s 00:21:17.993 user 0m17.569s 00:21:17.993 sys 0m1.412s 00:21:17.993 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:17.993 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:17.993 ************************************ 00:21:17.993 END TEST nvmf_shutdown_tc3 00:21:17.993 ************************************ 00:21:17.993 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:21:17.993 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:21:17.993 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:21:17.993 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:17.993 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:17.993 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:17.993 ************************************ 00:21:17.993 START TEST nvmf_shutdown_tc4 00:21:17.993 ************************************ 00:21:17.993 09:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4 00:21:17.993 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:21:17.993 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:17.993 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:17.993 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:17.993 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:17.993 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:17.993 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:17.993 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.993 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:17.993 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.993 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:17.993 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:17.993 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:17.993 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:17.993 09:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:17.993 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:17.993 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:17.993 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:17.993 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:17.993 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:17.993 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:17.993 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:21:17.993 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:17.994 09:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:17.994 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:17.994 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:17.994 09:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 
00:21:17.994 Found net devices under 0000:09:00.0: cvl_0_0 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:17.994 Found net devices under 0000:09:00.1: cvl_0_1 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:17.994 09:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:17.994 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:17.995 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:17.995 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:17.995 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:17.995 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:17.995 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:17.995 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:17.995 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:17.995 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:17.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:17.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:21:17.995 00:21:17.995 --- 10.0.0.2 ping statistics --- 00:21:17.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.995 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:21:17.995 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:17.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:17.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:21:17.995 00:21:17.995 --- 10.0.0.1 ping statistics --- 00:21:17.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.995 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:21:17.995 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:17.995 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:21:17.995 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:17.995 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:17.995 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:17.995 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:17.995 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:17.995 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:17.995 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:17.995 09:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:17.995 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:17.995 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:17.995 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:17.995 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3376845 00:21:17.995 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:17.995 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3376845 00:21:17.995 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 3376845 ']' 00:21:17.995 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.995 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:17.995 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:17.995 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:17.995 09:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:17.995 [2024-11-20 09:11:54.797650] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:21:17.995 [2024-11-20 09:11:54.797744] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.995 [2024-11-20 09:11:54.872939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:17.995 [2024-11-20 09:11:54.934027] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.995 [2024-11-20 09:11:54.934078] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:17.995 [2024-11-20 09:11:54.934106] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:17.995 [2024-11-20 09:11:54.934117] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:17.995 [2024-11-20 09:11:54.934126] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:17.995 [2024-11-20 09:11:54.935683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:17.995 [2024-11-20 09:11:54.935796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:17.995 [2024-11-20 09:11:54.935928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:17.995 [2024-11-20 09:11:54.935932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.253 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:18.253 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0 00:21:18.253 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:18.253 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:18.253 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:18.253 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.253 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:18.253 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.253 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:18.253 [2024-11-20 09:11:55.089346] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.253 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.253 09:11:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:18.253 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:18.253 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:18.253 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:18.253 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:18.253 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:18.253 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:18.253 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:18.253 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:18.253 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:18.253 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:18.253 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:18.254 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:18.254 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:18.254 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:21:18.254 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:18.254 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:18.254 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:18.254 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:18.254 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:18.254 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:18.254 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:18.254 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:18.254 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:18.254 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:18.254 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:18.254 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.254 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:18.254 Malloc1 00:21:18.254 [2024-11-20 09:11:55.196685] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.254 Malloc2 00:21:18.254 Malloc3 00:21:18.512 Malloc4 00:21:18.512 Malloc5 00:21:18.512 Malloc6 00:21:18.512 Malloc7 00:21:18.512 Malloc8 00:21:18.770 Malloc9 
00:21:18.770 Malloc10 00:21:18.770 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.770 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:18.770 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:18.770 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:18.770 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3377020 00:21:18.770 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:21:18.770 09:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:21:18.770 [2024-11-20 09:11:55.714144] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:21:24.039 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:24.039 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3376845 00:21:24.039 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 3376845 ']' 00:21:24.039 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 3376845 00:21:24.039 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname 00:21:24.039 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:24.039 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3376845 00:21:24.039 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:24.039 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:24.039 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3376845' 00:21:24.039 killing process with pid 3376845 00:21:24.039 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 3376845 00:21:24.039 09:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 3376845 00:21:24.039 Write completed with error (sct=0, sc=8) 00:21:24.039 Write completed with error (sct=0, sc=8) 00:21:24.039 Write completed with error (sct=0, sc=8) 00:21:24.039 Write completed with 
error (sct=0, sc=8) 00:21:24.039 starting I/O failed: -6 00:21:24.039 Write completed with error (sct=0, sc=8) 00:21:24.039 Write completed with error (sct=0, sc=8) 00:21:24.039 Write completed with error (sct=0, sc=8) 00:21:24.039 Write completed with error (sct=0, sc=8) 00:21:24.039 starting I/O failed: -6 00:21:24.039 Write completed with error (sct=0, sc=8) 00:21:24.039 Write completed with error (sct=0, sc=8) 00:21:24.039 Write completed with error (sct=0, sc=8) 00:21:24.039 Write completed with error (sct=0, sc=8) 00:21:24.039 starting I/O failed: -6 00:21:24.039 Write completed with error (sct=0, sc=8) 00:21:24.039 Write completed with error (sct=0, sc=8) 00:21:24.039 Write completed with error (sct=0, sc=8) 00:21:24.039 Write completed with error (sct=0, sc=8) 00:21:24.039 starting I/O failed: -6 00:21:24.039 Write completed with error (sct=0, sc=8) 00:21:24.039 Write completed with error (sct=0, sc=8) 00:21:24.039 Write completed with error (sct=0, sc=8) 00:21:24.039 Write completed with error (sct=0, sc=8) 00:21:24.039 starting I/O failed: -6 00:21:24.039 Write completed with error (sct=0, sc=8) 00:21:24.039 Write completed with error (sct=0, sc=8) 00:21:24.039 Write completed with error (sct=0, sc=8) 00:21:24.039 Write completed with error (sct=0, sc=8) 00:21:24.039 starting I/O failed: -6 00:21:24.039 Write completed with error (sct=0, sc=8) 00:21:24.039 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 
00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 [2024-11-20 09:12:00.708810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 
00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 [2024-11-20 09:12:00.709999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O 
failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 [2024-11-20 09:12:00.710789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ceec0 is same with Write completed with error (sct=0, sc=8) 00:21:24.040 the state(6) to be set 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 [2024-11-20 09:12:00.710847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ceec0 is same with the state(6) to be set 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 [2024-11-20 09:12:00.710878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ceec0 is same with Write completed with error (sct=0, sc=8) 00:21:24.040 the state(6) to be set 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 [2024-11-20 09:12:00.710904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ceec0 is same with starting I/O failed: -6 00:21:24.040 the state(6) to be set 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 [2024-11-20 09:12:00.710933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x18ceec0 is same with the state(6) to be set 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 [2024-11-20 09:12:00.710956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ceec0 is same with the state(6) to be set 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 [2024-11-20 09:12:00.710981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ceec0 is same with the state(6) to be set 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 [2024-11-20 09:12:00.711004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ceec0 is same with Write completed with error (sct=0, sc=8) 00:21:24.040 the state(6) to be set 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 [2024-11-20 09:12:00.711032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ceec0 is same with the state(6) to be set 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 [2024-11-20 09:12:00.711160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.040 
starting I/O failed: -6 00:21:24.040 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 [2024-11-20 09:12:00.711708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf3b0 is same with the state(6) to be set 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 [2024-11-20 09:12:00.711744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf3b0 is same with the state(6) to be set 00:21:24.041 starting I/O failed: -6 00:21:24.041 [2024-11-20 09:12:00.711760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x18cf3b0 is same with the state(6) to be set 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 [2024-11-20 09:12:00.711772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf3b0 is same with the state(6) to be set 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 [2024-11-20 09:12:00.711786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf3b0 is same with the state(6) to be set 00:21:24.041 starting I/O failed: -6 00:21:24.041 [2024-11-20 09:12:00.711799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf3b0 is same with the state(6) to be set 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 [2024-11-20 09:12:00.711811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf3b0 is same with the state(6) to be set 00:21:24.041 starting I/O failed: -6 00:21:24.041 [2024-11-20 09:12:00.711823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf3b0 is same with the state(6) to be set 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 [2024-11-20 09:12:00.711836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf3b0 is same with starting I/O failed: -6 00:21:24.041 the state(6) to be set 00:21:24.041 [2024-11-20 09:12:00.711849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf3b0 is same with Write completed with error (sct=0, sc=8) 00:21:24.041 the state(6) to be set 00:21:24.041 starting I/O failed: -6 00:21:24.041 [2024-11-20 09:12:00.711862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf3b0 is same with the state(6) to be set 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 
starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 [2024-11-20 09:12:00.712223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf880 is same with Write completed with error (sct=0, sc=8) 00:21:24.041 the state(6) to be set 00:21:24.041 starting I/O failed: -6 00:21:24.041 [2024-11-20 09:12:00.712258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf880 is same with Write completed with error (sct=0, sc=8) 00:21:24.041 the state(6) to be set 00:21:24.041 starting I/O failed: -6 
00:21:24.041 [2024-11-20 09:12:00.712275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf880 is same with the state(6) to be set 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 [2024-11-20 09:12:00.712297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf880 is same with starting I/O failed: -6 00:21:24.041 the state(6) to be set 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 [2024-11-20 09:12:00.712342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf880 is same with the state(6) to be set 00:21:24.041 starting I/O failed: -6 00:21:24.041 [2024-11-20 09:12:00.712357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cf880 is same with the state(6) to be set 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 [2024-11-20 09:12:00.712853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ce9f0 is same with the state(6) to be 
set 00:21:24.041 [2024-11-20 09:12:00.712890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ce9f0 is same with the state(6) to be set 00:21:24.041 [2024-11-20 09:12:00.712907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ce9f0 is same with the state(6) to be set 00:21:24.041 [2024-11-20 09:12:00.712907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:24.041 [2024-11-20 09:12:00.712921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ce9f0 is same with the state(6) to be set 00:21:24.041 NVMe io qpair process completion error 00:21:24.041 [2024-11-20 09:12:00.712938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ce9f0 is same with the state(6) to be set 00:21:24.041 [2024-11-20 09:12:00.712950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ce9f0 is same with the state(6) to be set 00:21:24.041 [2024-11-20 09:12:00.712961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ce9f0 is same with the state(6) to be set 00:21:24.041 [2024-11-20 09:12:00.714373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ce010 is same with the state(6) to be set 00:21:24.041 [2024-11-20 09:12:00.714403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ce010 is same with the state(6) to be set 00:21:24.041 [2024-11-20 09:12:00.714417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ce010 is same with the state(6) to be set 00:21:24.041 [2024-11-20 09:12:00.714428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ce010 is same with the state(6) to be set 00:21:24.041 [2024-11-20 09:12:00.714441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x18ce010 is same with the state(6) to be set 00:21:24.041 [2024-11-20 09:12:00.714452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ce010 is same with the state(6) to be set 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.041 starting I/O failed: -6 00:21:24.041 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 
00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 [2024-11-20 09:12:00.717828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting 
I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error 
(sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 [2024-11-20 09:12:00.718940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 
00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with 
error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 [2024-11-20 09:12:00.720177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.042 Write completed with error (sct=0, sc=8) 00:21:24.042 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, 
sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 [2024-11-20 09:12:00.720945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182caa0 is same with Write completed with error (sct=0, sc=8) 00:21:24.043 the state(6) to be set 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 [2024-11-20 09:12:00.720982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182caa0 is same with the state(6) to be set 00:21:24.043 starting I/O failed: -6 00:21:24.043 [2024-11-20 09:12:00.720997] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182caa0 is same with the state(6) to be set 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 [2024-11-20 09:12:00.721009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182caa0 is same with the state(6) to be set 00:21:24.043 starting I/O failed: -6 00:21:24.043 [2024-11-20 09:12:00.721021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182caa0 is same with the state(6) to be set 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 [2024-11-20 09:12:00.721034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182caa0 is same with the state(6) to be set 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 [2024-11-20 09:12:00.721047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182caa0 is same with the state(6) to be set 00:21:24.043 starting I/O failed: -6 00:21:24.043 [2024-11-20 09:12:00.721059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182caa0 is same with the state(6) to be set 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 [2024-11-20 09:12:00.721071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182caa0 is same with the state(6) to be set 00:21:24.043 starting I/O failed: -6 00:21:24.043 [2024-11-20 09:12:00.721083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182caa0 is same with the state(6) to be set 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 
00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 [2024-11-20 09:12:00.721425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182cf70 is same with the state(6) to be set 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 [2024-11-20 09:12:00.721457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182cf70 is same with the state(6) to be set 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 [2024-11-20 09:12:00.721481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182cf70 is same with the state(6) to be set 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 [2024-11-20 09:12:00.721495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182cf70 is same with the state(6) to be set 00:21:24.043 
starting I/O failed: -6 00:21:24.043 [2024-11-20 09:12:00.721507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182cf70 is same with the state(6) to be set 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 [2024-11-20 09:12:00.721519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182cf70 is same with starting I/O failed: -6 00:21:24.043 the state(6) to be set 00:21:24.043 [2024-11-20 09:12:00.721533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182cf70 is same with Write completed with error (sct=0, sc=8) 00:21:24.043 the state(6) to be set 00:21:24.043 starting I/O failed: -6 00:21:24.043 [2024-11-20 09:12:00.721547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182cf70 is same with the state(6) to be set 00:21:24.043 [2024-11-20 09:12:00.721558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182cf70 is same with the state(6) to be set 00:21:24.043 [2024-11-20 09:12:00.721570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182cf70 is same with the state(6) to be set 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 [2024-11-20 09:12:00.721899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:24.043 NVMe io qpair process completion error 00:21:24.043 [2024-11-20 09:12:00.722418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d10b0 is same with the state(6) to be set 00:21:24.043 [2024-11-20 09:12:00.722450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d10b0 is same with the state(6) to be set 00:21:24.043 [2024-11-20 09:12:00.722466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d10b0 is same 
with the state(6) to be set 00:21:24.043 [2024-11-20 09:12:00.722486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d10b0 is same with the state(6) to be set 00:21:24.043 [2024-11-20 09:12:00.722498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d10b0 is same with the state(6) to be set 00:21:24.043 [2024-11-20 09:12:00.722511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d10b0 is same with the state(6) to be set 00:21:24.043 [2024-11-20 09:12:00.722523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d10b0 is same with the state(6) to be set 00:21:24.043 [2024-11-20 09:12:00.722535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d10b0 is same with the state(6) to be set 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 [2024-11-20 09:12:00.725680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182eb00 is same with Write completed with error (sct=0, sc=8) 00:21:24.043 the state(6) to be set 00:21:24.043 starting I/O failed: -6 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 [2024-11-20 09:12:00.725716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182eb00 is same with the state(6) to be set 00:21:24.043 Write completed with error (sct=0, sc=8) 00:21:24.043 [2024-11-20 09:12:00.725741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182eb00 is same with the state(6) to be set 00:21:24.043 Write completed with error 
(sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 [2024-11-20 09:12:00.725766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182eb00 is same with the state(6) to be set 00:21:24.044 starting I/O failed: -6 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 [2024-11-20 09:12:00.725807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182eb00 is same with the state(6) to be set 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 [2024-11-20 09:12:00.725832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182eb00 is same with the state(6) to be set 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 starting I/O failed: -6 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 starting I/O failed: -6 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 starting I/O failed: -6 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 starting I/O failed: -6 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 starting I/O failed: -6 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed 
with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 starting I/O failed: -6 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 [2024-11-20 09:12:00.726354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182efd0 is same with the state(6) to be set 00:21:24.044 [2024-11-20 09:12:00.726370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.044 [2024-11-20 09:12:00.726386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182efd0 is same with the state(6) to be set 00:21:24.044 [2024-11-20 09:12:00.726417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182efd0 is same with the state(6) to be set 00:21:24.044 [2024-11-20 09:12:00.726430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182efd0 is same with the state(6) to be set 00:21:24.044 [2024-11-20 09:12:00.726442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182efd0 is same with the state(6) to be set 00:21:24.044 starting I/O failed: -6 00:21:24.044 [2024-11-20 09:12:00.726553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182f4a0 is same with Write completed with error (sct=0, sc=8) 00:21:24.044 the state(6) to be set 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 [2024-11-20 09:12:00.726585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182f4a0 is same with the state(6) to be set 00:21:24.044 [2024-11-20 09:12:00.726607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182f4a0 is same with the state(6) to be set 00:21:24.044 Write completed with error (sct=0, sc=8) 
00:21:24.044 [2024-11-20 09:12:00.726619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182f4a0 is same with the state(6) to be set 00:21:24.044 starting I/O failed: -6 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 starting I/O failed: -6 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 starting I/O failed: -6 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 starting I/O failed: -6 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 starting I/O failed: -6 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 starting I/O failed: -6 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 starting I/O failed: -6 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 starting I/O failed: -6 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 starting I/O failed: -6 00:21:24.044 [2024-11-20 09:12:00.726948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182e630 is same with the state(6) to be set 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 starting I/O failed: -6 00:21:24.044 [2024-11-20 09:12:00.726978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182e630 is same with the state(6) to be set 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 [2024-11-20 09:12:00.726992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182e630 is same with the state(6) to be set 
00:21:24.044 [2024-11-20 09:12:00.727004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182e630 is same with the state(6) to be set 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 [2024-11-20 09:12:00.727016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182e630 is same with the state(6) to be set 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 [2024-11-20 09:12:00.727028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182e630 is same with the state(6) to be set 00:21:24.044 starting I/O failed: -6 00:21:24.044 [2024-11-20 09:12:00.727040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182e630 is same with the state(6) to be set 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 [2024-11-20 09:12:00.727052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182e630 is same with the state(6) to be set 00:21:24.044 starting I/O failed: -6 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 starting I/O failed: -6 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 starting I/O failed: -6 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 starting I/O failed: -6 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 starting I/O failed: -6 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 starting I/O failed: -6 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 starting I/O failed: -6 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write 
completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 starting I/O failed: -6 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 starting I/O failed: -6 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 starting I/O failed: -6 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 starting I/O failed: -6 00:21:24.044 [2024-11-20 09:12:00.727509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 starting I/O failed: -6 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 starting I/O failed: -6 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 starting I/O failed: -6 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 starting I/O failed: -6 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 starting I/O failed: -6 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 starting I/O failed: -6 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.044 Write completed with error (sct=0, sc=8) 00:21:24.045 starting I/O failed: -6 00:21:24.045 Write completed with error (sct=0, sc=8) 00:21:24.045 starting I/O failed: -6 00:21:24.045 Write completed with error (sct=0, sc=8) 00:21:24.045 starting I/O failed: -6 00:21:24.045 Write completed with error (sct=0, sc=8) 00:21:24.045 Write completed with error (sct=0, sc=8) 00:21:24.045 starting I/O failed: -6 00:21:24.045 Write completed with error (sct=0, sc=8) 00:21:24.045 starting I/O failed: -6 00:21:24.045 Write completed with error (sct=0, sc=8) 00:21:24.045 
starting I/O failed: -6 00:21:24.045 Write completed with error (sct=0, sc=8) 00:21:24.045 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided ...]
00:21:24.045 [2024-11-20 09:12:00.728695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:24.045 [... repeated completion-error entries elided ...]
00:21:24.046 [2024-11-20 09:12:00.730721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:24.046 NVMe io qpair process completion error
00:21:24.046 [... repeated completion-error entries elided ...]
00:21:24.046 [2024-11-20 09:12:00.731975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.046 [... repeated completion-error entries elided ...]
00:21:24.046 [2024-11-20 09:12:00.733068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:24.046 [... repeated completion-error entries elided ...]
00:21:24.047 [2024-11-20 09:12:00.734228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:24.047 [... repeated completion-error entries elided ...]
00:21:24.047 [2024-11-20 09:12:00.735923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:24.047 NVMe io qpair process completion error
00:21:24.047 [... repeated completion-error entries elided ...]
00:21:24.047 [2024-11-20 09:12:00.737290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:24.047 [... repeated completion-error entries elided ...]
00:21:24.048 [2024-11-20 09:12:00.738244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:24.048 [... repeated completion-error entries elided ...]
00:21:24.048 [2024-11-20 09:12:00.739507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:24.048 [... repeated completion-error entries elided ...]
00:21:24.049 [2024-11-20 09:12:00.741730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:24.049 NVMe io qpair process completion error
00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 [... repeated completion-error entries elided ...] 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 Write completed with error (sct=0, sc=8)
00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 starting I/O failed: -6 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 starting I/O failed: -6 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 starting I/O failed: -6 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 starting I/O failed: -6 00:21:24.049 [2024-11-20 09:12:00.743122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.049 starting I/O failed: -6 00:21:24.049 starting I/O failed: -6 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 starting I/O failed: -6 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 starting I/O failed: -6 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 starting I/O failed: -6 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 starting I/O failed: -6 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 starting I/O failed: -6 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 starting I/O failed: -6 
00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 starting I/O failed: -6 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 starting I/O failed: -6 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 starting I/O failed: -6 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 starting I/O failed: -6 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 starting I/O failed: -6 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 starting I/O failed: -6 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 starting I/O failed: -6 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 starting I/O failed: -6 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 starting I/O failed: -6 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 starting I/O failed: -6 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 starting I/O failed: -6 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 starting I/O failed: -6 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 [2024-11-20 09:12:00.744247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport 
error -6 (No such device or address) on qpair id 3 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 starting I/O failed: -6 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 starting I/O failed: -6 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 starting I/O failed: -6 00:21:24.049 Write completed with error (sct=0, sc=8) 00:21:24.049 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting 
I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write 
completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 [2024-11-20 09:12:00.745419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: 
-6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.050 starting I/O failed: -6 00:21:24.050 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O 
failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 [2024-11-20 09:12:00.749043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such 
device or address) on qpair id 4 00:21:24.051 NVMe io qpair process completion error 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with 
error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 [2024-11-20 09:12:00.750361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write 
completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O 
failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 [2024-11-20 09:12:00.751453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error 
(sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 starting I/O failed: -6 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.051 Write completed with error (sct=0, sc=8) 00:21:24.052 starting I/O failed: -6 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 starting I/O failed: -6 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 starting I/O failed: -6 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 starting I/O failed: -6 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 starting I/O failed: -6 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 starting I/O failed: -6 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 starting I/O failed: -6 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 starting I/O failed: -6 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 starting I/O failed: -6 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 starting I/O failed: -6 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 starting I/O failed: -6 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 starting I/O failed: -6 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 starting I/O failed: -6 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 starting I/O failed: -6 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 starting I/O failed: -6 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 starting I/O failed: -6 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 starting 
I/O failed: -6 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 starting I/O failed: -6 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 starting I/O failed: -6 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 starting I/O failed: -6 00:21:24.052 [2024-11-20 09:12:00.752616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 starting I/O failed: -6 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 starting I/O failed: -6 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 starting I/O failed: -6 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 starting I/O failed: -6 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 starting I/O failed: -6 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 starting I/O failed: -6 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 starting I/O failed: -6 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 starting I/O failed: -6 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 starting I/O failed: -6 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 starting I/O failed: -6 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 starting I/O failed: -6 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 starting I/O failed: -6 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 starting I/O failed: -6 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 starting I/O failed: -6 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 starting I/O failed: -6 00:21:24.052 Write completed with error (sct=0, sc=8) 00:21:24.052 starting I/O failed: -6 
00:21:24.052 Write completed with error (sct=0, sc=8)
00:21:24.052 starting I/O failed: -6
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided]
00:21:24.052 [2024-11-20 09:12:00.755957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:24.052 NVMe io qpair process completion error
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided]
00:21:24.053 [2024-11-20 09:12:00.757146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided]
00:21:24.053 [2024-11-20 09:12:00.758228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided]
00:21:24.053 [2024-11-20 09:12:00.759406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided]
00:21:24.054 [2024-11-20 09:12:00.761812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:24.054 NVMe io qpair process completion error
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided]
00:21:24.054 [2024-11-20 09:12:00.763084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided]
00:21:24.054 [2024-11-20 09:12:00.764177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided]
00:21:24.055 [2024-11-20 09:12:00.765353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided]
00:21:24.056 [2024-11-20 09:12:00.767489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:24.056 NVMe io qpair process completion error
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided]
00:21:24.056 [2024-11-20 09:12:00.769467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided]
00:21:24.057 Write
completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 
00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 [2024-11-20 09:12:00.770708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: 
-6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O 
failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.057 starting I/O failed: -6 00:21:24.057 Write completed with error (sct=0, sc=8) 00:21:24.058 starting I/O failed: -6 00:21:24.058 Write completed with error (sct=0, sc=8) 00:21:24.058 starting I/O failed: -6 00:21:24.058 Write completed with error (sct=0, sc=8) 00:21:24.058 starting 
I/O failed: -6 00:21:24.058 Write completed with error (sct=0, sc=8) 00:21:24.058 starting I/O failed: -6 00:21:24.058 Write completed with error (sct=0, sc=8) 00:21:24.058 starting I/O failed: -6 00:21:24.058 Write completed with error (sct=0, sc=8) 00:21:24.058 starting I/O failed: -6 00:21:24.058 Write completed with error (sct=0, sc=8) 00:21:24.058 starting I/O failed: -6 00:21:24.058 Write completed with error (sct=0, sc=8) 00:21:24.058 starting I/O failed: -6 00:21:24.058 Write completed with error (sct=0, sc=8) 00:21:24.058 starting I/O failed: -6 00:21:24.058 Write completed with error (sct=0, sc=8) 00:21:24.058 starting I/O failed: -6 00:21:24.058 Write completed with error (sct=0, sc=8) 00:21:24.058 starting I/O failed: -6 00:21:24.058 [2024-11-20 09:12:00.774449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:24.058 NVMe io qpair process completion error 00:21:24.058 Initializing NVMe Controllers 00:21:24.058 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:21:24.058 Controller IO queue size 128, less than required. 00:21:24.058 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:24.058 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:21:24.058 Controller IO queue size 128, less than required. 00:21:24.058 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:24.058 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:21:24.058 Controller IO queue size 128, less than required. 00:21:24.058 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:21:24.058 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:21:24.058 Controller IO queue size 128, less than required. 00:21:24.058 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:24.058 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:21:24.058 Controller IO queue size 128, less than required. 00:21:24.058 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:24.058 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:24.058 Controller IO queue size 128, less than required. 00:21:24.058 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:24.058 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:21:24.058 Controller IO queue size 128, less than required. 00:21:24.058 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:24.058 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:21:24.058 Controller IO queue size 128, less than required. 00:21:24.058 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:24.058 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:21:24.058 Controller IO queue size 128, less than required. 00:21:24.058 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:24.058 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:21:24.058 Controller IO queue size 128, less than required. 00:21:24.058 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:21:24.058 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:21:24.058 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:21:24.058 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:21:24.058 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:21:24.058 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:21:24.058 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:24.058 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:21:24.058 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:21:24.058 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:21:24.058 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:21:24.058 Initialization complete. Launching workers. 
00:21:24.058 ======================================================== 00:21:24.058 Latency(us) 00:21:24.058 Device Information : IOPS MiB/s Average min max 00:21:24.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1782.85 76.61 71815.04 873.30 124017.67 00:21:24.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1914.73 82.27 66889.90 953.42 122738.32 00:21:24.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1873.45 80.50 68394.16 838.93 120827.61 00:21:24.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1853.89 79.66 69163.33 946.84 131103.30 00:21:24.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1803.71 77.50 71134.77 1098.39 118795.12 00:21:24.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1784.59 76.68 71084.76 962.86 118008.47 00:21:24.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1856.93 79.79 69068.57 816.43 116531.66 00:21:24.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1828.69 78.58 69389.70 1115.39 116965.28 00:21:24.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1778.94 76.44 72059.62 1062.28 135770.97 00:21:24.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1785.02 76.70 71100.09 930.03 119025.58 00:21:24.058 ======================================================== 00:21:24.058 Total : 18262.79 784.73 69971.86 816.43 135770.97 00:21:24.058 00:21:24.058 [2024-11-20 09:12:00.779677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a9ae0 is same with the state(6) to be set 00:21:24.058 [2024-11-20 09:12:00.779766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a8920 is same with the state(6) to be set 00:21:24.058 [2024-11-20 09:12:00.779826] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a82c0 is same with the state(6) to be set 00:21:24.058 [2024-11-20 09:12:00.779883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a79e0 is same with the state(6) to be set 00:21:24.058 [2024-11-20 09:12:00.779938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a76b0 is same with the state(6) to be set 00:21:24.058 [2024-11-20 09:12:00.779994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a9720 is same with the state(6) to be set 00:21:24.058 [2024-11-20 09:12:00.780051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a85f0 is same with the state(6) to be set 00:21:24.058 [2024-11-20 09:12:00.780107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a9900 is same with the state(6) to be set 00:21:24.058 [2024-11-20 09:12:00.780170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a8c50 is same with the state(6) to be set 00:21:24.059 [2024-11-20 09:12:00.780225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a7d10 is same with the state(6) to be set 00:21:24.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:21:24.317 09:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:21:25.254 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3377020 00:21:25.254 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:21:25.254 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3377020 00:21:25.254 09:12:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:21:25.254 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:25.254 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:21:25.254 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:25.254 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 3377020 00:21:25.254 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:21:25.254 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:25.254 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:25.254 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:25.254 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:21:25.254 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:25.254 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:25.254 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:25.254 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:25.254 09:12:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:25.254 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:21:25.254 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:25.254 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:21:25.254 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:25.254 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:25.254 rmmod nvme_tcp 00:21:25.254 rmmod nvme_fabrics 00:21:25.254 rmmod nvme_keyring 00:21:25.254 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:25.254 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:21:25.254 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:21:25.254 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3376845 ']' 00:21:25.254 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3376845 00:21:25.254 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 3376845 ']' 00:21:25.254 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 3376845 00:21:25.254 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3376845) - No such process 00:21:25.254 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 3376845 is not 
found' 00:21:25.254 Process with pid 3376845 is not found 00:21:25.254 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:25.254 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:25.255 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:25.255 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:21:25.255 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:21:25.255 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:25.255 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:21:25.255 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:25.255 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:25.255 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.255 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:25.255 09:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.791 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:27.791 00:21:27.791 real 0m9.755s 00:21:27.791 user 0m24.174s 00:21:27.791 sys 0m5.464s 00:21:27.791 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:21:27.791 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:27.791 ************************************ 00:21:27.791 END TEST nvmf_shutdown_tc4 00:21:27.791 ************************************ 00:21:27.791 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:21:27.791 00:21:27.791 real 0m37.353s 00:21:27.791 user 1m40.844s 00:21:27.791 sys 0m12.034s 00:21:27.791 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:27.791 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:27.791 ************************************ 00:21:27.791 END TEST nvmf_shutdown 00:21:27.791 ************************************ 00:21:27.791 09:12:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:27.791 09:12:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:27.791 09:12:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:27.791 09:12:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:27.791 ************************************ 00:21:27.791 START TEST nvmf_nsid 00:21:27.791 ************************************ 00:21:27.791 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:27.791 * Looking for test storage... 
00:21:27.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:27.791 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:27.791 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:21:27.791 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:27.791 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:27.791 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:27.791 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:27.791 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:27.791 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:21:27.791 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:21:27.791 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:21:27.791 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:27.792 
09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:27.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.792 --rc genhtml_branch_coverage=1 00:21:27.792 --rc genhtml_function_coverage=1 00:21:27.792 --rc genhtml_legend=1 00:21:27.792 --rc geninfo_all_blocks=1 00:21:27.792 --rc 
geninfo_unexecuted_blocks=1 00:21:27.792 00:21:27.792 ' 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:27.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.792 --rc genhtml_branch_coverage=1 00:21:27.792 --rc genhtml_function_coverage=1 00:21:27.792 --rc genhtml_legend=1 00:21:27.792 --rc geninfo_all_blocks=1 00:21:27.792 --rc geninfo_unexecuted_blocks=1 00:21:27.792 00:21:27.792 ' 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:27.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.792 --rc genhtml_branch_coverage=1 00:21:27.792 --rc genhtml_function_coverage=1 00:21:27.792 --rc genhtml_legend=1 00:21:27.792 --rc geninfo_all_blocks=1 00:21:27.792 --rc geninfo_unexecuted_blocks=1 00:21:27.792 00:21:27.792 ' 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:27.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.792 --rc genhtml_branch_coverage=1 00:21:27.792 --rc genhtml_function_coverage=1 00:21:27.792 --rc genhtml_legend=1 00:21:27.792 --rc geninfo_all_blocks=1 00:21:27.792 --rc geninfo_unexecuted_blocks=1 00:21:27.792 00:21:27.792 ' 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:27.792 09:12:04 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:27.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:21:27.792 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:21:27.793 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:27.793 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:27.793 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:27.793 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:27.793 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:27.793 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.793 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:21:27.793 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.793 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:27.793 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:27.793 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:21:27.793 09:12:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:29.694 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:29.694 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:29.694 Found net devices under 0000:09:00.0: cvl_0_0 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:29.694 Found net devices under 0000:09:00.1: cvl_0_1 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:29.694 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:29.695 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:29.695 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:29.695 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:29.695 09:12:06 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:29.695 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:29.695 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:29.695 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:29.695 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:29.695 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:29.695 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:29.695 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:29.695 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:29.953 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:29.953 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:29.953 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:29.953 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:29.953 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:29.953 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:29.953 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:21:29.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:21:29.953 00:21:29.953 --- 10.0.0.2 ping statistics --- 00:21:29.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.953 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:21:29.954 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:29.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:29.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:21:29.954 00:21:29.954 --- 10.0.0.1 ping statistics --- 00:21:29.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.954 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:21:29.954 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:29.954 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:21:29.954 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:29.954 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:29.954 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:29.954 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:29.954 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:29.954 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:29.954 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:29.954 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:21:29.954 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:29.954 09:12:06 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:29.954 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:29.954 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3379867 00:21:29.954 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:21:29.954 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3379867 00:21:29.954 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 3379867 ']' 00:21:29.954 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.954 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:29.954 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.954 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:29.954 09:12:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:29.954 [2024-11-20 09:12:06.861703] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:21:29.954 [2024-11-20 09:12:06.861789] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.954 [2024-11-20 09:12:06.934426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.212 [2024-11-20 09:12:06.994769] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:30.212 [2024-11-20 09:12:06.994817] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:30.212 [2024-11-20 09:12:06.994845] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:30.212 [2024-11-20 09:12:06.994856] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:30.212 [2024-11-20 09:12:06.994865] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:30.212 [2024-11-20 09:12:06.995510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.212 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:30.212 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:21:30.212 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:30.212 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:30.212 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:30.212 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.212 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:30.212 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3379915 00:21:30.212 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:21:30.212 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:21:30.212 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:21:30.212 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:21:30.212 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:30.212 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:30.212 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:30.212 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:30.212 
09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:30.212 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:30.212 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:30.212 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:30.212 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:30.212 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:21:30.212 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:21:30.212 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=795677ce-aa52-49df-a5d3-648533c33cde 00:21:30.212 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:21:30.212 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=b311770d-f878-431f-9ad8-6c674cfbae82 00:21:30.212 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:21:30.212 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=d6afeb99-0a8a-415b-8053-bcbecc9fddbc 00:21:30.212 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:21:30.212 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.212 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:30.212 null0 00:21:30.212 null1 00:21:30.212 null2 00:21:30.212 [2024-11-20 09:12:07.179728] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:30.212 [2024-11-20 09:12:07.194145] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:21:30.212 [2024-11-20 09:12:07.194220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3379915 ] 00:21:30.212 [2024-11-20 09:12:07.203932] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:30.470 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.470 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3379915 /var/tmp/tgt2.sock 00:21:30.470 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 3379915 ']' 00:21:30.470 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:21:30.470 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:30.470 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:21:30.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:21:30.470 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable
00:21:30.470 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:21:30.470 [2024-11-20 09:12:07.265892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:30.470 [2024-11-20 09:12:07.324533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:21:30.728 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:21:30.728 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0
00:21:30.728 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock
00:21:30.985 [2024-11-20 09:12:07.999702] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:31.243 [2024-11-20 09:12:08.015897] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 ***
00:21:31.243 nvme0n1 nvme0n2
00:21:31.243 nvme1n1
00:21:31.243 09:12:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect
00:21:31.243 09:12:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr
00:21:31.243 09:12:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a
00:21:31.808 09:12:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme*
00:21:31.808 09:12:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]]
00:21:31.808 09:12:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]]
00:21:31.808 09:12:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0
00:21:31.808 09:12:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return
00:21:31.808 09:12:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0
00:21:31.808 09:12:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1
00:21:31.808 09:12:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0
00:21:31.808 09:12:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME
00:21:31.808 09:12:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1
00:21:31.808 09:12:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']'
00:21:31.808 09:12:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1
00:21:31.808 09:12:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1
00:21:32.739 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME
00:21:32.739 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1
00:21:32.739 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME
00:21:32.739 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1
00:21:32.739 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0
00:21:32.739 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 795677ce-aa52-49df-a5d3-648533c33cde
00:21:32.739 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d -
00:21:32.739 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1
00:21:32.739 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid
00:21:32.739 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json
00:21:32.739 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid
00:21:32.739 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=795677ceaa5249dfa5d3648533c33cde
00:21:32.739 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 795677CEAA5249DFA5D3648533C33CDE
00:21:32.739 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 795677CEAA5249DFA5D3648533C33CDE == \7\9\5\6\7\7\C\E\A\A\5\2\4\9\D\F\A\5\D\3\6\4\8\5\3\3\C\3\3\C\D\E ]]
00:21:32.739 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2
00:21:32.739 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0
00:21:32.739 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME
00:21:32.739 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2
00:21:32.739 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME
00:21:32.739 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2
00:21:32.739 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0
00:21:32.739 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid b311770d-f878-431f-9ad8-6c674cfbae82
00:21:32.739 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d -
00:21:32.739 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2
00:21:32.739 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid
00:21:32.739 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json
00:21:32.739 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid
00:21:32.739 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=b311770df878431f9ad86c674cfbae82
00:21:32.739 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo B311770DF878431F9AD86C674CFBAE82
00:21:32.739 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ B311770DF878431F9AD86C674CFBAE82 == \B\3\1\1\7\7\0\D\F\8\7\8\4\3\1\F\9\A\D\8\6\C\6\7\4\C\F\B\A\E\8\2 ]]
00:21:32.739 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3
00:21:32.739 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0
00:21:32.739 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME
00:21:32.739 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3
00:21:32.740 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME
00:21:32.740 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3
00:21:32.740 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0
00:21:32.740 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid d6afeb99-0a8a-415b-8053-bcbecc9fddbc
00:21:32.740 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d -
00:21:32.740 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3
00:21:32.740 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid
00:21:32.740 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json
00:21:32.740 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid
00:21:32.997 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=d6afeb990a8a415b8053bcbecc9fddbc
00:21:32.997 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo D6AFEB990A8A415B8053BCBECC9FDDBC
00:21:32.997 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ D6AFEB990A8A415B8053BCBECC9FDDBC == \D\6\A\F\E\B\9\9\0\A\8\A\4\1\5\B\8\0\5\3\B\C\B\E\C\C\9\F\D\D\B\C ]]
00:21:32.997 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0
00:21:32.997 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT
00:21:32.997 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup
00:21:32.997 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3379915
00:21:32.997 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 3379915 ']'
00:21:32.997 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 3379915
00:21:32.997 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname
00:21:32.997 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:21:32.997 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3379915
00:21:32.997 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:21:32.997 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:21:32.997 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3379915'
killing process with pid 3379915
09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 3379915
00:21:32.998 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 3379915
00:21:33.570 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini
00:21:33.570 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:33.570 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync
00:21:33.570 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:33.570 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e
00:21:33.570 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:33.570 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:33.570 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:21:33.570 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:33.570 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e
00:21:33.570 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0
00:21:33.570 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3379867 ']'
00:21:33.570 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3379867
00:21:33.570 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 3379867 ']'
00:21:33.570 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 3379867
00:21:33.570 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname
00:21:33.570 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:21:33.570 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3379867
00:21:33.570 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:21:33.570 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:21:33.570 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3379867'
killing process with pid 3379867
09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 3379867
09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 3379867
00:21:33.873 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:21:33.873 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:21:33.873 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:21:33.873 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr
00:21:33.873 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save
00:21:33.873 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:21:33.873 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore
00:21:33.873 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:21:33.873 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns
00:21:33.873 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:33.873 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:33.873 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:35.811 09:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:21:35.811 
00:21:35.811 real	0m8.409s
00:21:35.811 user	0m8.245s
00:21:35.811 sys	0m2.695s
00:21:35.811 09:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable
00:21:35.811 09:12:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:21:35.811 ************************************
00:21:35.811 END TEST nvmf_nsid
00:21:35.811 ************************************
00:21:35.811 09:12:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:21:35.811 
00:21:35.811 real	11m48.902s
00:21:35.811 user	28m11.201s
00:21:35.811 sys	2m45.658s
00:21:35.811 09:12:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable
00:21:35.811 09:12:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:21:35.811 ************************************
00:21:35.811 END TEST nvmf_target_extra
00:21:35.811 ************************************
00:21:36.070 09:12:12 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:21:36.070 09:12:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:21:36.070 09:12:12 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable
00:21:36.070 09:12:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:21:36.070 ************************************
00:21:36.070 START TEST nvmf_host
00:21:36.070 ************************************
00:21:36.070 09:12:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:21:36.070 * Looking for test storage...
00:21:36.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:36.070 09:12:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:36.070 09:12:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:21:36.070 09:12:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:36.070 09:12:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:36.070 09:12:13 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:36.070 09:12:13 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:36.070 09:12:13 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:36.070 09:12:13 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:36.070 09:12:13 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:36.070 09:12:13 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:36.070 09:12:13 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:36.070 09:12:13 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:36.070 09:12:13 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:36.070 09:12:13 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:36.070 09:12:13 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:36.070 09:12:13 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:36.070 09:12:13 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:36.070 09:12:13 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:36.070 09:12:13 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:36.070 09:12:13 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:36.070 09:12:13 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:36.070 09:12:13 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:36.070 09:12:13 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:36.070 09:12:13 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:36.070 09:12:13 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:36.070 09:12:13 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:36.070 09:12:13 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:36.070 09:12:13 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:36.070 09:12:13 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:36.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.071 --rc genhtml_branch_coverage=1 00:21:36.071 --rc genhtml_function_coverage=1 00:21:36.071 --rc genhtml_legend=1 00:21:36.071 --rc geninfo_all_blocks=1 00:21:36.071 --rc geninfo_unexecuted_blocks=1 00:21:36.071 00:21:36.071 ' 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:36.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.071 --rc genhtml_branch_coverage=1 00:21:36.071 --rc genhtml_function_coverage=1 00:21:36.071 --rc genhtml_legend=1 00:21:36.071 --rc 
geninfo_all_blocks=1 00:21:36.071 --rc geninfo_unexecuted_blocks=1 00:21:36.071 00:21:36.071 ' 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:36.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.071 --rc genhtml_branch_coverage=1 00:21:36.071 --rc genhtml_function_coverage=1 00:21:36.071 --rc genhtml_legend=1 00:21:36.071 --rc geninfo_all_blocks=1 00:21:36.071 --rc geninfo_unexecuted_blocks=1 00:21:36.071 00:21:36.071 ' 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:36.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.071 --rc genhtml_branch_coverage=1 00:21:36.071 --rc genhtml_function_coverage=1 00:21:36.071 --rc genhtml_legend=1 00:21:36.071 --rc geninfo_all_blocks=1 00:21:36.071 --rc geninfo_unexecuted_blocks=1 00:21:36.071 00:21:36.071 ' 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:36.071 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.071 ************************************ 00:21:36.071 START TEST nvmf_multicontroller 00:21:36.071 ************************************ 00:21:36.071 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:36.331 * Looking for test storage... 
00:21:36.331 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:36.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.331 --rc genhtml_branch_coverage=1 00:21:36.331 --rc genhtml_function_coverage=1 
00:21:36.331 --rc genhtml_legend=1 00:21:36.331 --rc geninfo_all_blocks=1 00:21:36.331 --rc geninfo_unexecuted_blocks=1 00:21:36.331 00:21:36.331 ' 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:36.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.331 --rc genhtml_branch_coverage=1 00:21:36.331 --rc genhtml_function_coverage=1 00:21:36.331 --rc genhtml_legend=1 00:21:36.331 --rc geninfo_all_blocks=1 00:21:36.331 --rc geninfo_unexecuted_blocks=1 00:21:36.331 00:21:36.331 ' 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:36.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.331 --rc genhtml_branch_coverage=1 00:21:36.331 --rc genhtml_function_coverage=1 00:21:36.331 --rc genhtml_legend=1 00:21:36.331 --rc geninfo_all_blocks=1 00:21:36.331 --rc geninfo_unexecuted_blocks=1 00:21:36.331 00:21:36.331 ' 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:36.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.331 --rc genhtml_branch_coverage=1 00:21:36.331 --rc genhtml_function_coverage=1 00:21:36.331 --rc genhtml_legend=1 00:21:36.331 --rc geninfo_all_blocks=1 00:21:36.331 --rc geninfo_unexecuted_blocks=1 00:21:36.331 00:21:36.331 ' 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421
00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob
00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH
00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0
00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:36.331 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:21:36.332 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:21:36.332 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:21:36.332 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0
00:21:36.332 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64
00:21:36.332 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:21:36.332 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000
00:21:36.332 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001
00:21:36.332 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:21:36.332 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']'
00:21:36.332 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit
00:21:36.332 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:21:36.332 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:21:36.332 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs
00:21:36.332 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no
00:21:36.332 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns
00:21:36.332 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:36.332 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:36.332 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:36.332 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:21:36.332 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:21:36.332 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable
00:21:36.332 09:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:38.860 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:21:38.860 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=()
00:21:38.860 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs
00:21:38.860 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=()
00:21:38.860 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:21:38.860 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=()
00:21:38.860 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers
00:21:38.860 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=()
00:21:38.860 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs
00:21:38.860 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=()
00:21:38.860 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810
00:21:38.860 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=()
00:21:38.860 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722
00:21:38.860 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=()
00:21:38.860 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx
00:21:38.860 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:21:38.860 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:21:38.860 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:21:38.860 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:21:38.860 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:21:38.860 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:21:38.860 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:21:38.860 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:21:38.860 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)'
00:21:38.861 Found 0000:09:00.0 (0x8086 - 0x159b)
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)'
00:21:38.861 Found 0000:09:00.1 (0x8086 - 0x159b)
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]]
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0'
00:21:38.861 Found net devices under 0000:09:00.0: cvl_0_0
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]]
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1'
00:21:38.861 Found net devices under 0000:09:00.1: cvl_0_1
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:21:38.861 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:38.861 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms
00:21:38.861
00:21:38.861 --- 10.0.0.2 ping statistics ---
00:21:38.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:38.861 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:38.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:38.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms
00:21:38.861
00:21:38.861 --- 10.0.0.1 ping statistics ---
00:21:38.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:38.861 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:38.861 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3382964
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3382964
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 3382964 ']'
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:38.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:38.862 [2024-11-20 09:12:15.535164] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization...
00:21:38.862 [2024-11-20 09:12:15.535263] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:38.862 [2024-11-20 09:12:15.605867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:21:38.862 [2024-11-20 09:12:15.661330] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:38.862 [2024-11-20 09:12:15.661400] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:38.862 [2024-11-20 09:12:15.661413] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:38.862 [2024-11-20 09:12:15.661424] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:38.862 [2024-11-20 09:12:15.661433] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:38.862 [2024-11-20 09:12:15.662783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:21:38.862 [2024-11-20 09:12:15.662852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:21:38.862 [2024-11-20 09:12:15.662849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:38.862 [2024-11-20 09:12:15.810704] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:38.862 Malloc0
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:38.862 [2024-11-20 09:12:15.871845] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:38.862 [2024-11-20 09:12:15.879722] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:38.862 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:39.120 Malloc1
00:21:39.120 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:39.120 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
00:21:39.120 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:39.120 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:39.120 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:39.120 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
00:21:39.120 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:39.120 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:39.120 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:39.120 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:21:39.120 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:39.120 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:39.120 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:39.120 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
00:21:39.120 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:39.120 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:39.120 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:39.120 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3382985
00:21:39.120 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:39.120 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f
00:21:39.120 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3382985 /var/tmp/bdevperf.sock
00:21:39.120 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 3382985 ']'
00:21:39.120 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:39.120 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100
00:21:39.120 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:39.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:21:39.120 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable
00:21:39.120 09:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:39.376 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:21:39.376 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0
00:21:39.376 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
00:21:39.376 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:39.376 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:39.634 NVMe0n1
00:21:39.634 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:39.634 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:39.634 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe
00:21:39.634 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:39.634 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:39.634 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:39.634 1
00:21:39.634 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001
00:21:39.634 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0
00:21:39.634 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001
00:21:39.634 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:21:39.634 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:21:39.634 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:21:39.634 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:21:39.634 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001
00:21:39.634 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:39.634 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:21:39.634 request:
00:21:39.634 {
00:21:39.634 "name": "NVMe0",
00:21:39.634 "trtype": "tcp",
00:21:39.634 "traddr": "10.0.0.2",
00:21:39.634 "adrfam": "ipv4",
00:21:39.634 "trsvcid": "4420",
00:21:39.634 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:21:39.634 "hostnqn": "nqn.2021-09-7.io.spdk:00001",
00:21:39.634 "hostaddr": "10.0.0.1",
00:21:39.634 "prchk_reftag": false,
00:21:39.634 "prchk_guard": false,
00:21:39.634 "hdgst": false,
00:21:39.634 "ddgst": false,
00:21:39.634 "allow_unrecognized_csi": false,
00:21:39.634 "method": "bdev_nvme_attach_controller",
00:21:39.634 "req_id": 1
00:21:39.634 }
00:21:39.634 Got JSON-RPC error response
00:21:39.634 response:
00:21:39.634 {
00:21:39.634 "code": -114,
00:21:39.634 "message": "A controller named NVMe0 already exists with the specified network path"
00:21:39.634 }
00:21:39.634 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:21:39.634 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1
00:21:39.634 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:21:39.634 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:21:39.634 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:21:39.634 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
00:21:39.634 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0
00:21:39.634 09:12:16
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:39.634 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:39.634 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:39.634 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:39.634 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:39.634 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:39.634 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.634 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:39.634 request: 00:21:39.634 { 00:21:39.634 "name": "NVMe0", 00:21:39.634 "trtype": "tcp", 00:21:39.634 "traddr": "10.0.0.2", 00:21:39.634 "adrfam": "ipv4", 00:21:39.634 "trsvcid": "4420", 00:21:39.634 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:39.634 "hostaddr": "10.0.0.1", 00:21:39.634 "prchk_reftag": false, 00:21:39.634 "prchk_guard": false, 00:21:39.634 "hdgst": false, 00:21:39.634 "ddgst": false, 00:21:39.634 "allow_unrecognized_csi": false, 00:21:39.634 "method": "bdev_nvme_attach_controller", 00:21:39.634 "req_id": 1 00:21:39.634 } 00:21:39.634 Got JSON-RPC error response 00:21:39.634 response: 00:21:39.634 { 00:21:39.634 "code": -114, 00:21:39.634 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:39.634 } 00:21:39.634 09:12:16 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:39.634 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:39.634 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:39.634 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:39.635 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:39.635 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:39.635 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:39.635 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:39.635 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:39.635 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:39.635 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:39.635 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:39.635 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:39.635 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.635 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:39.635 request: 00:21:39.635 { 00:21:39.635 "name": "NVMe0", 00:21:39.635 "trtype": "tcp", 00:21:39.635 "traddr": "10.0.0.2", 00:21:39.635 "adrfam": "ipv4", 00:21:39.635 "trsvcid": "4420", 00:21:39.635 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.635 "hostaddr": "10.0.0.1", 00:21:39.635 "prchk_reftag": false, 00:21:39.635 "prchk_guard": false, 00:21:39.635 "hdgst": false, 00:21:39.635 "ddgst": false, 00:21:39.635 "multipath": "disable", 00:21:39.635 "allow_unrecognized_csi": false, 00:21:39.635 "method": "bdev_nvme_attach_controller", 00:21:39.635 "req_id": 1 00:21:39.635 } 00:21:39.635 Got JSON-RPC error response 00:21:39.635 response: 00:21:39.635 { 00:21:39.635 "code": -114, 00:21:39.635 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:21:39.635 } 00:21:39.635 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:39.635 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:39.635 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:39.635 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:39.635 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:39.635 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:39.635 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:39.635 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:39.635 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:39.635 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:39.635 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:39.635 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:39.635 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:39.635 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.635 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:39.635 request: 00:21:39.635 { 00:21:39.635 "name": "NVMe0", 00:21:39.635 "trtype": "tcp", 00:21:39.635 "traddr": "10.0.0.2", 00:21:39.635 "adrfam": "ipv4", 00:21:39.635 "trsvcid": "4420", 00:21:39.635 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.635 "hostaddr": "10.0.0.1", 00:21:39.635 "prchk_reftag": false, 00:21:39.635 "prchk_guard": false, 00:21:39.635 "hdgst": false, 00:21:39.635 "ddgst": false, 00:21:39.635 "multipath": "failover", 00:21:39.635 "allow_unrecognized_csi": false, 00:21:39.635 "method": "bdev_nvme_attach_controller", 00:21:39.635 "req_id": 1 00:21:39.635 } 00:21:39.635 Got JSON-RPC error response 00:21:39.635 response: 00:21:39.635 { 00:21:39.635 "code": -114, 00:21:39.635 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:39.635 } 00:21:39.635 09:12:16 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:39.635 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:39.635 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:39.635 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:39.635 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:39.635 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:39.635 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.635 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:39.894 NVMe0n1 00:21:39.894 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.894 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:39.894 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.894 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:39.894 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.894 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:39.894 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.894 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:39.894 00:21:39.894 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.894 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:39.894 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:39.894 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.894 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:39.894 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.894 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:39.894 09:12:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:41.266 { 00:21:41.266 "results": [ 00:21:41.266 { 00:21:41.266 "job": "NVMe0n1", 00:21:41.266 "core_mask": "0x1", 00:21:41.266 "workload": "write", 00:21:41.266 "status": "finished", 00:21:41.266 "queue_depth": 128, 00:21:41.266 "io_size": 4096, 00:21:41.266 "runtime": 1.006372, 00:21:41.266 "iops": 18490.18056941171, 00:21:41.266 "mibps": 72.22726784926449, 00:21:41.266 "io_failed": 0, 00:21:41.266 "io_timeout": 0, 00:21:41.266 "avg_latency_us": 6904.571018757366, 00:21:41.266 "min_latency_us": 3932.16, 00:21:41.266 "max_latency_us": 13592.651851851851 00:21:41.266 } 00:21:41.266 ], 00:21:41.266 "core_count": 1 00:21:41.266 } 00:21:41.266 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe1 00:21:41.266 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.266 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.266 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.266 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:21:41.266 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3382985 00:21:41.266 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 3382985 ']' 00:21:41.266 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 3382985 00:21:41.266 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:21:41.266 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:41.266 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3382985 00:21:41.266 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:41.266 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:41.266 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3382985' 00:21:41.266 killing process with pid 3382985 00:21:41.266 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 3382985 00:21:41.266 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 3382985 00:21:41.524 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:21:41.524 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.524 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.524 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.524 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:41.524 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.524 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.524 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.524 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:21:41.524 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:41.524 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:21:41.524 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:41.524 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:21:41.524 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:21:41.524 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:41.524 [2024-11-20 09:12:15.988227] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:21:41.524 [2024-11-20 09:12:15.988331] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3382985 ] 00:21:41.525 [2024-11-20 09:12:16.060961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.525 [2024-11-20 09:12:16.122010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.525 [2024-11-20 09:12:16.873852] bdev.c:4708:bdev_name_add: *ERROR*: Bdev name 71bb6154-80ed-41f8-8993-762ac127c54c already exists 00:21:41.525 [2024-11-20 09:12:16.873891] bdev.c:7917:bdev_register: *ERROR*: Unable to add uuid:71bb6154-80ed-41f8-8993-762ac127c54c alias for bdev NVMe1n1 00:21:41.525 [2024-11-20 09:12:16.873906] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:41.525 Running I/O for 1 seconds... 00:21:41.525 18416.00 IOPS, 71.94 MiB/s 00:21:41.525 Latency(us) 00:21:41.525 [2024-11-20T08:12:18.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.525 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:41.525 NVMe0n1 : 1.01 18490.18 72.23 0.00 0.00 6904.57 3932.16 13592.65 00:21:41.525 [2024-11-20T08:12:18.554Z] =================================================================================================================== 00:21:41.525 [2024-11-20T08:12:18.554Z] Total : 18490.18 72.23 0.00 0.00 6904.57 3932.16 13592.65 00:21:41.525 Received shutdown signal, test time was about 1.000000 seconds 00:21:41.525 00:21:41.525 Latency(us) 00:21:41.525 [2024-11-20T08:12:18.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.525 [2024-11-20T08:12:18.554Z] =================================================================================================================== 00:21:41.525 [2024-11-20T08:12:18.554Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:21:41.525 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:41.525 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:41.525 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:21:41.525 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:21:41.525 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:41.525 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:21:41.525 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:41.525 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:21:41.525 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:41.525 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:41.525 rmmod nvme_tcp 00:21:41.525 rmmod nvme_fabrics 00:21:41.525 rmmod nvme_keyring 00:21:41.525 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:41.525 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:21:41.525 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:21:41.525 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3382964 ']' 00:21:41.525 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3382964 00:21:41.525 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 3382964 ']' 00:21:41.525 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 3382964 
00:21:41.525 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:21:41.525 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:41.525 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3382964 00:21:41.525 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:41.525 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:41.525 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3382964' 00:21:41.525 killing process with pid 3382964 00:21:41.525 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 3382964 00:21:41.525 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 3382964 00:21:41.784 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:41.784 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:41.784 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:41.784 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:21:41.784 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:21:41.784 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:41.784 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:21:41.784 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:41.784 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:21:41.784 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.784 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:41.784 09:12:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.702 09:12:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:43.702 00:21:43.702 real 0m7.652s 00:21:43.702 user 0m12.210s 00:21:43.702 sys 0m2.354s 00:21:43.702 09:12:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:43.702 09:12:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.702 ************************************ 00:21:43.702 END TEST nvmf_multicontroller 00:21:43.702 ************************************ 00:21:43.960 09:12:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:43.960 09:12:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:43.960 09:12:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:43.960 09:12:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.960 ************************************ 00:21:43.960 START TEST nvmf_aer 00:21:43.960 ************************************ 00:21:43.960 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:43.960 * Looking for test storage... 
00:21:43.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:43.960 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:43.960 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:21:43.960 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:43.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.961 --rc genhtml_branch_coverage=1 00:21:43.961 --rc genhtml_function_coverage=1 00:21:43.961 --rc genhtml_legend=1 00:21:43.961 --rc geninfo_all_blocks=1 00:21:43.961 --rc geninfo_unexecuted_blocks=1 00:21:43.961 00:21:43.961 ' 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:43.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.961 --rc 
genhtml_branch_coverage=1 00:21:43.961 --rc genhtml_function_coverage=1 00:21:43.961 --rc genhtml_legend=1 00:21:43.961 --rc geninfo_all_blocks=1 00:21:43.961 --rc geninfo_unexecuted_blocks=1 00:21:43.961 00:21:43.961 ' 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:43.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.961 --rc genhtml_branch_coverage=1 00:21:43.961 --rc genhtml_function_coverage=1 00:21:43.961 --rc genhtml_legend=1 00:21:43.961 --rc geninfo_all_blocks=1 00:21:43.961 --rc geninfo_unexecuted_blocks=1 00:21:43.961 00:21:43.961 ' 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:43.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.961 --rc genhtml_branch_coverage=1 00:21:43.961 --rc genhtml_function_coverage=1 00:21:43.961 --rc genhtml_legend=1 00:21:43.961 --rc geninfo_all_blocks=1 00:21:43.961 --rc geninfo_unexecuted_blocks=1 00:21:43.961 00:21:43.961 ' 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:43.961 09:12:20 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:43.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:21:43.961 09:12:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:46.495 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:46.495 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:21:46.495 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:46.495 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:46.495 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:46.495 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:46.495 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:46.495 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:21:46.495 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:46.495 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:21:46.495 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:21:46.495 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:21:46.495 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:21:46.495 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:21:46.495 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:21:46.495 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:46.495 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:46.495 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:46.495 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:46.495 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:46.495 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:46.495 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:46.495 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:46.495 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:46.495 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:46.495 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:46.495 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:46.495 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:21:46.495 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:46.495 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:46.495 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:46.495 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:46.495 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:46.495 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:46.496 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:46.496 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.496 09:12:23 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:46.496 Found net devices under 0000:09:00.0: cvl_0_0 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:46.496 Found net devices under 0000:09:00.1: cvl_0_1 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:46.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:46.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:21:46.496 00:21:46.496 --- 10.0.0.2 ping statistics --- 00:21:46.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.496 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:46.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:46.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:21:46.496 00:21:46.496 --- 10.0.0.1 ping statistics --- 00:21:46.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.496 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3385218 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3385218 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 3385218 ']' 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:46.496 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:46.496 [2024-11-20 09:12:23.283783] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:21:46.496 [2024-11-20 09:12:23.283867] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:46.496 [2024-11-20 09:12:23.354740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:46.496 [2024-11-20 09:12:23.415459] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:46.496 [2024-11-20 09:12:23.415509] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:46.496 [2024-11-20 09:12:23.415539] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:46.496 [2024-11-20 09:12:23.415551] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:46.496 [2024-11-20 09:12:23.415561] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:46.496 [2024-11-20 09:12:23.417339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.496 [2024-11-20 09:12:23.417395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:46.496 [2024-11-20 09:12:23.417443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:46.496 [2024-11-20 09:12:23.417446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:46.755 [2024-11-20 09:12:23.569397] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:46.755 Malloc0 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:46.755 [2024-11-20 09:12:23.643045] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:46.755 [ 00:21:46.755 { 00:21:46.755 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:46.755 "subtype": "Discovery", 00:21:46.755 "listen_addresses": [], 00:21:46.755 "allow_any_host": true, 00:21:46.755 "hosts": [] 00:21:46.755 }, 00:21:46.755 { 00:21:46.755 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:46.755 "subtype": "NVMe", 00:21:46.755 "listen_addresses": [ 00:21:46.755 { 00:21:46.755 "trtype": "TCP", 00:21:46.755 "adrfam": "IPv4", 00:21:46.755 "traddr": "10.0.0.2", 00:21:46.755 "trsvcid": "4420" 00:21:46.755 } 00:21:46.755 ], 00:21:46.755 "allow_any_host": true, 00:21:46.755 "hosts": [], 00:21:46.755 "serial_number": "SPDK00000000000001", 00:21:46.755 "model_number": "SPDK bdev Controller", 00:21:46.755 "max_namespaces": 2, 00:21:46.755 "min_cntlid": 1, 00:21:46.755 "max_cntlid": 65519, 00:21:46.755 "namespaces": [ 00:21:46.755 { 00:21:46.755 "nsid": 1, 00:21:46.755 "bdev_name": "Malloc0", 00:21:46.755 "name": "Malloc0", 00:21:46.755 "nguid": "BBCE314FA15743D78D3C3094C15F545B", 00:21:46.755 "uuid": "bbce314f-a157-43d7-8d3c-3094c15f545b" 00:21:46.755 } 00:21:46.755 ] 00:21:46.755 } 00:21:46.755 ] 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3385357 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:21:46.755 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:46.756 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:21:46.756 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:21:46.756 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:21:47.014 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:47.014 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:47.014 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:21:47.014 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:47.014 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.014 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:47.014 Malloc1 00:21:47.014 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.014 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:47.014 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.014 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:47.014 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.014 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:47.014 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.014 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:47.014 [ 00:21:47.014 { 00:21:47.014 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:47.014 "subtype": "Discovery", 00:21:47.014 "listen_addresses": [], 00:21:47.014 "allow_any_host": true, 00:21:47.014 "hosts": [] 00:21:47.014 }, 00:21:47.014 { 00:21:47.014 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:47.014 "subtype": "NVMe", 00:21:47.014 "listen_addresses": [ 00:21:47.014 { 00:21:47.014 "trtype": "TCP", 00:21:47.014 "adrfam": "IPv4", 00:21:47.014 "traddr": "10.0.0.2", 00:21:47.014 "trsvcid": "4420" 00:21:47.014 } 00:21:47.014 ], 00:21:47.014 "allow_any_host": true, 00:21:47.014 "hosts": [], 00:21:47.014 "serial_number": "SPDK00000000000001", 00:21:47.014 "model_number": 
"SPDK bdev Controller", 00:21:47.014 "max_namespaces": 2, 00:21:47.014 "min_cntlid": 1, 00:21:47.014 "max_cntlid": 65519, 00:21:47.014 "namespaces": [ 00:21:47.014 { 00:21:47.014 "nsid": 1, 00:21:47.014 "bdev_name": "Malloc0", 00:21:47.014 "name": "Malloc0", 00:21:47.014 "nguid": "BBCE314FA15743D78D3C3094C15F545B", 00:21:47.014 "uuid": "bbce314f-a157-43d7-8d3c-3094c15f545b" 00:21:47.014 }, 00:21:47.014 { 00:21:47.014 "nsid": 2, 00:21:47.014 "bdev_name": "Malloc1", 00:21:47.014 "name": "Malloc1", 00:21:47.014 "nguid": "A654BF8AE13A451DB53D6B3A3A1C5DA1", 00:21:47.014 "uuid": "a654bf8a-e13a-451d-b53d-6b3a3a1c5da1" 00:21:47.014 } 00:21:47.014 ] 00:21:47.014 } 00:21:47.014 ] 00:21:47.014 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.014 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3385357 00:21:47.014 Asynchronous Event Request test 00:21:47.014 Attaching to 10.0.0.2 00:21:47.014 Attached to 10.0.0.2 00:21:47.014 Registering asynchronous event callbacks... 00:21:47.014 Starting namespace attribute notice tests for all controllers... 00:21:47.014 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:47.014 aer_cb - Changed Namespace 00:21:47.014 Cleaning up... 
00:21:47.014 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:47.014 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.014 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:47.014 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.014 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:47.014 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.014 09:12:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:47.014 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.014 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:47.014 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.014 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:47.014 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.014 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:47.014 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:47.014 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:47.014 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:21:47.014 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:47.014 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:21:47.014 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:47.014 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:47.014 rmmod nvme_tcp 
00:21:47.274 rmmod nvme_fabrics 00:21:47.274 rmmod nvme_keyring 00:21:47.274 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:47.274 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:21:47.274 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:21:47.274 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3385218 ']' 00:21:47.274 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3385218 00:21:47.274 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 3385218 ']' 00:21:47.274 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 3385218 00:21:47.274 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:21:47.274 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:47.274 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3385218 00:21:47.274 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:47.274 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:47.274 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3385218' 00:21:47.274 killing process with pid 3385218 00:21:47.274 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 3385218 00:21:47.274 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 3385218 00:21:47.532 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:47.532 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:47.532 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:47.532 09:12:24 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:21:47.532 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:21:47.532 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:47.532 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:21:47.533 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:47.533 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:47.533 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.533 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:47.533 09:12:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.440 09:12:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:49.440 00:21:49.440 real 0m5.656s 00:21:49.440 user 0m4.556s 00:21:49.440 sys 0m2.098s 00:21:49.440 09:12:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:49.440 09:12:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:49.440 ************************************ 00:21:49.440 END TEST nvmf_aer 00:21:49.440 ************************************ 00:21:49.440 09:12:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:49.440 09:12:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:49.440 09:12:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:49.440 09:12:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.699 ************************************ 00:21:49.699 START TEST nvmf_async_init 
00:21:49.699 ************************************ 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:49.699 * Looking for test storage... 00:21:49.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init 
-- scripts/common.sh@344 -- # case "$op" in 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:49.699 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:21:49.699 --rc genhtml_branch_coverage=1 00:21:49.699 --rc genhtml_function_coverage=1 00:21:49.699 --rc genhtml_legend=1 00:21:49.699 --rc geninfo_all_blocks=1 00:21:49.699 --rc geninfo_unexecuted_blocks=1 00:21:49.699 00:21:49.699 ' 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:49.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.699 --rc genhtml_branch_coverage=1 00:21:49.699 --rc genhtml_function_coverage=1 00:21:49.699 --rc genhtml_legend=1 00:21:49.699 --rc geninfo_all_blocks=1 00:21:49.699 --rc geninfo_unexecuted_blocks=1 00:21:49.699 00:21:49.699 ' 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:49.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.699 --rc genhtml_branch_coverage=1 00:21:49.699 --rc genhtml_function_coverage=1 00:21:49.699 --rc genhtml_legend=1 00:21:49.699 --rc geninfo_all_blocks=1 00:21:49.699 --rc geninfo_unexecuted_blocks=1 00:21:49.699 00:21:49.699 ' 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:49.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.699 --rc genhtml_branch_coverage=1 00:21:49.699 --rc genhtml_function_coverage=1 00:21:49.699 --rc genhtml_legend=1 00:21:49.699 --rc geninfo_all_blocks=1 00:21:49.699 --rc geninfo_unexecuted_blocks=1 00:21:49.699 00:21:49.699 ' 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:49.699 09:12:26 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:49.699 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:49.700 
09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:49.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=752d823f10d3438aa5146638595c008e 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:21:49.700 09:12:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:52.230 09:12:28 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:52.230 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:52.230 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:52.230 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:52.231 Found net devices under 0000:09:00.0: cvl_0_0 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:52.231 Found net devices under 0000:09:00.1: cvl_0_1 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:52.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:52.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:21:52.231 00:21:52.231 --- 10.0.0.2 ping statistics --- 00:21:52.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.231 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:52.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:52.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:21:52.231 00:21:52.231 --- 10.0.0.1 ping statistics --- 00:21:52.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.231 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3387304 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3387304 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 3387304 ']' 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:52.231 09:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.231 [2024-11-20 09:12:29.023049] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:21:52.231 [2024-11-20 09:12:29.023132] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.231 [2024-11-20 09:12:29.099128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.231 [2024-11-20 09:12:29.162032] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:52.231 [2024-11-20 09:12:29.162072] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:52.231 [2024-11-20 09:12:29.162101] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:52.231 [2024-11-20 09:12:29.162113] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:52.231 [2024-11-20 09:12:29.162123] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
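The target above is launched with `-m 0x1` (reactor core mask) and `-e 0xFFFF` (tracepoint group mask), and the EAL notice confirms a single core. A minimal sketch of how such a hex mask maps to a list of set bit positions; `mask_to_cores` is an illustrative helper, not an SPDK API:

```python
# Decode an SPDK-style hex mask (e.g. -m 0x1 or -e 0xFFFF) into the list
# of bit indices it selects. mask_to_cores is an illustrative helper,
# not part of SPDK itself.
def mask_to_cores(mask: str) -> list[int]:
    value = int(mask, 16)
    return [bit for bit in range(value.bit_length()) if value & (1 << bit)]

print(mask_to_cores("0x1"))     # -m 0x1 -> core 0 only ("Total cores available: 1")
print(mask_to_cores("0xFFFF"))  # -e 0xFFFF -> trace groups 0 through 15
```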
00:21:52.231 [2024-11-20 09:12:29.162713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.489 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:52.489 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:21:52.489 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:52.489 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:52.489 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.489 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:52.489 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:52.489 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.489 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.489 [2024-11-20 09:12:29.308780] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:52.489 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.489 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:52.489 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.489 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.489 null0 00:21:52.489 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.489 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:52.489 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.489 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.489 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.489 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:52.489 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.489 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.489 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.489 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 752d823f10d3438aa5146638595c008e 00:21:52.489 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.489 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.489 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.489 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:52.489 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.489 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.489 [2024-11-20 09:12:29.349029] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:52.489 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.489 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:52.489 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.489 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.747 nvme0n1 00:21:52.747 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.747 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:52.747 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.747 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.747 [ 00:21:52.747 { 00:21:52.747 "name": "nvme0n1", 00:21:52.747 "aliases": [ 00:21:52.747 "752d823f-10d3-438a-a514-6638595c008e" 00:21:52.747 ], 00:21:52.747 "product_name": "NVMe disk", 00:21:52.747 "block_size": 512, 00:21:52.747 "num_blocks": 2097152, 00:21:52.747 "uuid": "752d823f-10d3-438a-a514-6638595c008e", 00:21:52.747 "numa_id": 0, 00:21:52.747 "assigned_rate_limits": { 00:21:52.747 "rw_ios_per_sec": 0, 00:21:52.747 "rw_mbytes_per_sec": 0, 00:21:52.747 "r_mbytes_per_sec": 0, 00:21:52.747 "w_mbytes_per_sec": 0 00:21:52.747 }, 00:21:52.747 "claimed": false, 00:21:52.747 "zoned": false, 00:21:52.747 "supported_io_types": { 00:21:52.747 "read": true, 00:21:52.747 "write": true, 00:21:52.747 "unmap": false, 00:21:52.747 "flush": true, 00:21:52.747 "reset": true, 00:21:52.747 "nvme_admin": true, 00:21:52.747 "nvme_io": true, 00:21:52.747 "nvme_io_md": false, 00:21:52.747 "write_zeroes": true, 00:21:52.747 "zcopy": false, 00:21:52.747 "get_zone_info": false, 00:21:52.747 "zone_management": false, 00:21:52.747 "zone_append": false, 00:21:52.747 "compare": true, 00:21:52.747 "compare_and_write": true, 00:21:52.747 "abort": true, 00:21:52.747 "seek_hole": false, 00:21:52.747 "seek_data": false, 00:21:52.747 "copy": true, 00:21:52.747 
"nvme_iov_md": false 00:21:52.747 }, 00:21:52.747 "memory_domains": [ 00:21:52.747 { 00:21:52.747 "dma_device_id": "system", 00:21:52.747 "dma_device_type": 1 00:21:52.747 } 00:21:52.747 ], 00:21:52.747 "driver_specific": { 00:21:52.747 "nvme": [ 00:21:52.747 { 00:21:52.747 "trid": { 00:21:52.747 "trtype": "TCP", 00:21:52.747 "adrfam": "IPv4", 00:21:52.747 "traddr": "10.0.0.2", 00:21:52.747 "trsvcid": "4420", 00:21:52.747 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:52.747 }, 00:21:52.747 "ctrlr_data": { 00:21:52.747 "cntlid": 1, 00:21:52.747 "vendor_id": "0x8086", 00:21:52.747 "model_number": "SPDK bdev Controller", 00:21:52.747 "serial_number": "00000000000000000000", 00:21:52.747 "firmware_revision": "25.01", 00:21:52.747 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:52.747 "oacs": { 00:21:52.747 "security": 0, 00:21:52.747 "format": 0, 00:21:52.747 "firmware": 0, 00:21:52.747 "ns_manage": 0 00:21:52.747 }, 00:21:52.747 "multi_ctrlr": true, 00:21:52.747 "ana_reporting": false 00:21:52.747 }, 00:21:52.747 "vs": { 00:21:52.747 "nvme_version": "1.3" 00:21:52.747 }, 00:21:52.747 "ns_data": { 00:21:52.747 "id": 1, 00:21:52.747 "can_share": true 00:21:52.747 } 00:21:52.747 } 00:21:52.747 ], 00:21:52.747 "mp_policy": "active_passive" 00:21:52.747 } 00:21:52.747 } 00:21:52.747 ] 00:21:52.747 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.747 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:52.747 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.747 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.747 [2024-11-20 09:12:29.598049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:52.747 [2024-11-20 09:12:29.598142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x19ecb20 (9): Bad file descriptor 00:21:52.747 [2024-11-20 09:12:29.730437] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:21:52.747 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.747 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:52.747 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.747 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.747 [ 00:21:52.747 { 00:21:52.747 "name": "nvme0n1", 00:21:52.747 "aliases": [ 00:21:52.747 "752d823f-10d3-438a-a514-6638595c008e" 00:21:52.747 ], 00:21:52.747 "product_name": "NVMe disk", 00:21:52.747 "block_size": 512, 00:21:52.747 "num_blocks": 2097152, 00:21:52.747 "uuid": "752d823f-10d3-438a-a514-6638595c008e", 00:21:52.747 "numa_id": 0, 00:21:52.747 "assigned_rate_limits": { 00:21:52.747 "rw_ios_per_sec": 0, 00:21:52.747 "rw_mbytes_per_sec": 0, 00:21:52.747 "r_mbytes_per_sec": 0, 00:21:52.747 "w_mbytes_per_sec": 0 00:21:52.747 }, 00:21:52.747 "claimed": false, 00:21:52.747 "zoned": false, 00:21:52.747 "supported_io_types": { 00:21:52.747 "read": true, 00:21:52.747 "write": true, 00:21:52.747 "unmap": false, 00:21:52.747 "flush": true, 00:21:52.747 "reset": true, 00:21:52.747 "nvme_admin": true, 00:21:52.747 "nvme_io": true, 00:21:52.747 "nvme_io_md": false, 00:21:52.747 "write_zeroes": true, 00:21:52.747 "zcopy": false, 00:21:52.747 "get_zone_info": false, 00:21:52.747 "zone_management": false, 00:21:52.747 "zone_append": false, 00:21:52.747 "compare": true, 00:21:52.747 "compare_and_write": true, 00:21:52.747 "abort": true, 00:21:52.747 "seek_hole": false, 00:21:52.747 "seek_data": false, 00:21:52.747 "copy": true, 00:21:52.747 "nvme_iov_md": false 00:21:52.747 }, 00:21:52.747 "memory_domains": [ 
00:21:52.747 { 00:21:52.747 "dma_device_id": "system", 00:21:52.747 "dma_device_type": 1 00:21:52.747 } 00:21:52.747 ], 00:21:52.747 "driver_specific": { 00:21:52.747 "nvme": [ 00:21:52.747 { 00:21:52.747 "trid": { 00:21:52.747 "trtype": "TCP", 00:21:52.747 "adrfam": "IPv4", 00:21:52.747 "traddr": "10.0.0.2", 00:21:52.747 "trsvcid": "4420", 00:21:52.747 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:52.747 }, 00:21:52.747 "ctrlr_data": { 00:21:52.747 "cntlid": 2, 00:21:52.747 "vendor_id": "0x8086", 00:21:52.747 "model_number": "SPDK bdev Controller", 00:21:52.747 "serial_number": "00000000000000000000", 00:21:52.747 "firmware_revision": "25.01", 00:21:52.747 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:52.747 "oacs": { 00:21:52.747 "security": 0, 00:21:52.747 "format": 0, 00:21:52.747 "firmware": 0, 00:21:52.747 "ns_manage": 0 00:21:52.747 }, 00:21:52.747 "multi_ctrlr": true, 00:21:52.747 "ana_reporting": false 00:21:52.747 }, 00:21:52.747 "vs": { 00:21:52.747 "nvme_version": "1.3" 00:21:52.747 }, 00:21:52.747 "ns_data": { 00:21:52.747 "id": 1, 00:21:52.747 "can_share": true 00:21:52.747 } 00:21:52.747 } 00:21:52.747 ], 00:21:52.747 "mp_policy": "active_passive" 00:21:52.747 } 00:21:52.747 } 00:21:52.747 ] 00:21:52.747 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.747 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:52.747 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.747 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.747 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.747 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:52.747 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.T8OuizShgU 
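Two details of the bdev dumps above can be checked directly from the RPC arguments in the log: the namespace was added with `-g 752d823f10d3438aa5146638595c008e`, which `bdev_get_bdevs` reports in canonical hyphenated form as the bdev's `uuid` and alias, and `bdev_null_create null0 1024 512` (1024 MiB, 512-byte blocks) yields the reported `num_blocks`. A minimal stdlib sketch:

```python
import uuid

# Raw GUID passed to nvmf_subsystem_add_ns -g ...; uuid.UUID normalizes
# the 32 hex digits into the 8-4-4-4-12 form shown in the bdev dump.
raw_guid = "752d823f10d3438aa5146638595c008e"
canonical = str(uuid.UUID(raw_guid))
print(canonical)  # 752d823f-10d3-438a-a514-6638595c008e

# bdev_null_create null0 1024 512 -> 1024 MiB of 512-byte blocks.
size_mib, block_size = 1024, 512
num_blocks = size_mib * 1024 * 1024 // block_size
print(num_blocks)  # 2097152, matching "num_blocks" in the dump
```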
00:21:52.747 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:52.747 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.T8OuizShgU 00:21:52.747 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.T8OuizShgU 00:21:52.747 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.747 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:53.005 [2024-11-20 09:12:29.786643] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:53.005 [2024-11-20 09:12:29.786779] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
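The PSK echoed into the temp key file above uses the NVMe/TCP TLS interchange format, `NVMeTLSkey-1:<hash>:<base64>:`, where the base64 payload carries the configured key bytes followed by a 4-byte CRC32. A minimal parse of the key from the log, assuming that layout (the CRC value itself is not validated here):

```python
import base64

# PSK interchange string written to /tmp/tmp.T8OuizShgU in the log.
psk = "NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:"

# Split into the "NVMeTLSkey-1" label, the hash identifier ("01"), and
# the base64 payload; the trailing ':' terminates the string.
label, hash_id, payload = psk.rstrip(":").split(":")
decoded = base64.b64decode(payload)

# The last 4 decoded bytes are the CRC32 appended to the key material.
key, crc = decoded[:-4], decoded[-4:]
print(label, hash_id, key.decode())
# NVMeTLSkey-1 01 00112233445566778899aabbccddeeff
```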
00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:53.005 [2024-11-20 09:12:29.802699] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:53.005 nvme0n1 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:53.005 [ 00:21:53.005 { 00:21:53.005 "name": "nvme0n1", 00:21:53.005 "aliases": [ 00:21:53.005 "752d823f-10d3-438a-a514-6638595c008e" 00:21:53.005 ], 00:21:53.005 "product_name": "NVMe disk", 00:21:53.005 "block_size": 512, 00:21:53.005 "num_blocks": 2097152, 00:21:53.005 "uuid": "752d823f-10d3-438a-a514-6638595c008e", 00:21:53.005 "numa_id": 0, 00:21:53.005 "assigned_rate_limits": { 00:21:53.005 "rw_ios_per_sec": 0, 00:21:53.005 
"rw_mbytes_per_sec": 0, 00:21:53.005 "r_mbytes_per_sec": 0, 00:21:53.005 "w_mbytes_per_sec": 0 00:21:53.005 }, 00:21:53.005 "claimed": false, 00:21:53.005 "zoned": false, 00:21:53.005 "supported_io_types": { 00:21:53.005 "read": true, 00:21:53.005 "write": true, 00:21:53.005 "unmap": false, 00:21:53.005 "flush": true, 00:21:53.005 "reset": true, 00:21:53.005 "nvme_admin": true, 00:21:53.005 "nvme_io": true, 00:21:53.005 "nvme_io_md": false, 00:21:53.005 "write_zeroes": true, 00:21:53.005 "zcopy": false, 00:21:53.005 "get_zone_info": false, 00:21:53.005 "zone_management": false, 00:21:53.005 "zone_append": false, 00:21:53.005 "compare": true, 00:21:53.005 "compare_and_write": true, 00:21:53.005 "abort": true, 00:21:53.005 "seek_hole": false, 00:21:53.005 "seek_data": false, 00:21:53.005 "copy": true, 00:21:53.005 "nvme_iov_md": false 00:21:53.005 }, 00:21:53.005 "memory_domains": [ 00:21:53.005 { 00:21:53.005 "dma_device_id": "system", 00:21:53.005 "dma_device_type": 1 00:21:53.005 } 00:21:53.005 ], 00:21:53.005 "driver_specific": { 00:21:53.005 "nvme": [ 00:21:53.005 { 00:21:53.005 "trid": { 00:21:53.005 "trtype": "TCP", 00:21:53.005 "adrfam": "IPv4", 00:21:53.005 "traddr": "10.0.0.2", 00:21:53.005 "trsvcid": "4421", 00:21:53.005 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:53.005 }, 00:21:53.005 "ctrlr_data": { 00:21:53.005 "cntlid": 3, 00:21:53.005 "vendor_id": "0x8086", 00:21:53.005 "model_number": "SPDK bdev Controller", 00:21:53.005 "serial_number": "00000000000000000000", 00:21:53.005 "firmware_revision": "25.01", 00:21:53.005 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:53.005 "oacs": { 00:21:53.005 "security": 0, 00:21:53.005 "format": 0, 00:21:53.005 "firmware": 0, 00:21:53.005 "ns_manage": 0 00:21:53.005 }, 00:21:53.005 "multi_ctrlr": true, 00:21:53.005 "ana_reporting": false 00:21:53.005 }, 00:21:53.005 "vs": { 00:21:53.005 "nvme_version": "1.3" 00:21:53.005 }, 00:21:53.005 "ns_data": { 00:21:53.005 "id": 1, 00:21:53.005 "can_share": true 00:21:53.005 } 
00:21:53.005 } 00:21:53.005 ], 00:21:53.005 "mp_policy": "active_passive" 00:21:53.005 } 00:21:53.005 } 00:21:53.005 ] 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.T8OuizShgU 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:53.005 rmmod nvme_tcp 00:21:53.005 rmmod nvme_fabrics 00:21:53.005 rmmod nvme_keyring 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:21:53.005 09:12:29 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3387304 ']' 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3387304 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 3387304 ']' 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 3387304 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:53.005 09:12:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3387304 00:21:53.005 09:12:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:53.005 09:12:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:53.005 09:12:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3387304' 00:21:53.005 killing process with pid 3387304 00:21:53.005 09:12:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 3387304 00:21:53.005 09:12:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 3387304 00:21:53.263 09:12:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:53.263 09:12:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:53.263 09:12:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:53.263 09:12:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:21:53.263 09:12:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:21:53.263 09:12:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:53.263 
09:12:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:21:53.263 09:12:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:53.263 09:12:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:53.263 09:12:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.263 09:12:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:53.263 09:12:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:55.795 00:21:55.795 real 0m5.775s 00:21:55.795 user 0m2.232s 00:21:55.795 sys 0m1.982s 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.795 ************************************ 00:21:55.795 END TEST nvmf_async_init 00:21:55.795 ************************************ 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.795 ************************************ 00:21:55.795 START TEST dma 00:21:55.795 ************************************ 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:21:55.795 * Looking for test storage... 00:21:55.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:55.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.795 --rc genhtml_branch_coverage=1 00:21:55.795 --rc genhtml_function_coverage=1 00:21:55.795 --rc genhtml_legend=1 00:21:55.795 --rc geninfo_all_blocks=1 00:21:55.795 --rc geninfo_unexecuted_blocks=1 00:21:55.795 00:21:55.795 ' 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:55.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.795 --rc genhtml_branch_coverage=1 00:21:55.795 --rc genhtml_function_coverage=1 
00:21:55.795 --rc genhtml_legend=1 00:21:55.795 --rc geninfo_all_blocks=1 00:21:55.795 --rc geninfo_unexecuted_blocks=1 00:21:55.795 00:21:55.795 ' 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:55.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.795 --rc genhtml_branch_coverage=1 00:21:55.795 --rc genhtml_function_coverage=1 00:21:55.795 --rc genhtml_legend=1 00:21:55.795 --rc geninfo_all_blocks=1 00:21:55.795 --rc geninfo_unexecuted_blocks=1 00:21:55.795 00:21:55.795 ' 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:55.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.795 --rc genhtml_branch_coverage=1 00:21:55.795 --rc genhtml_function_coverage=1 00:21:55.795 --rc genhtml_legend=1 00:21:55.795 --rc geninfo_all_blocks=1 00:21:55.795 --rc geninfo_unexecuted_blocks=1 00:21:55.795 00:21:55.795 ' 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:55.795 09:12:32 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:21:55.796 
09:12:32 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:55.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:21:55.796 00:21:55.796 real 0m0.154s 00:21:55.796 user 0m0.098s 00:21:55.796 sys 0m0.064s 00:21:55.796 09:12:32 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:55.796 ************************************ 00:21:55.796 END TEST dma 00:21:55.796 ************************************ 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.796 ************************************ 00:21:55.796 START TEST nvmf_identify 00:21:55.796 ************************************ 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:55.796 * Looking for test storage... 
00:21:55.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:55.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.796 --rc genhtml_branch_coverage=1 00:21:55.796 --rc genhtml_function_coverage=1 00:21:55.796 --rc genhtml_legend=1 00:21:55.796 --rc geninfo_all_blocks=1 00:21:55.796 --rc geninfo_unexecuted_blocks=1 00:21:55.796 00:21:55.796 ' 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- 
# LCOV_OPTS=' 00:21:55.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.796 --rc genhtml_branch_coverage=1 00:21:55.796 --rc genhtml_function_coverage=1 00:21:55.796 --rc genhtml_legend=1 00:21:55.796 --rc geninfo_all_blocks=1 00:21:55.796 --rc geninfo_unexecuted_blocks=1 00:21:55.796 00:21:55.796 ' 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:55.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.796 --rc genhtml_branch_coverage=1 00:21:55.796 --rc genhtml_function_coverage=1 00:21:55.796 --rc genhtml_legend=1 00:21:55.796 --rc geninfo_all_blocks=1 00:21:55.796 --rc geninfo_unexecuted_blocks=1 00:21:55.796 00:21:55.796 ' 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:55.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.796 --rc genhtml_branch_coverage=1 00:21:55.796 --rc genhtml_function_coverage=1 00:21:55.796 --rc genhtml_legend=1 00:21:55.796 --rc geninfo_all_blocks=1 00:21:55.796 --rc geninfo_unexecuted_blocks=1 00:21:55.796 00:21:55.796 ' 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:55.796 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:55.797 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.797 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.797 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.797 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:21:55.797 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.797 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:21:55.797 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:55.797 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:55.797 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:55.797 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:55.797 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:55.797 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:55.797 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:55.797 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:55.797 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:55.797 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:55.797 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:55.797 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:55.797 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:55.797 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:55.797 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:55.797 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:55.797 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:55.797 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:55.797 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.797 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:55.797 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.797 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:55.797 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:55.797 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:21:55.797 09:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.327 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:58.327 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:21:58.327 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:58.327 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:58.327 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:58.327 09:12:34 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:58.327 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:58.327 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:21:58.327 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:58.327 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:21:58.327 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:21:58.327 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:21:58.327 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:21:58.327 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:58.328 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:58.328 
09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:58.328 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:58.328 Found net devices under 0000:09:00.0: cvl_0_0 00:21:58.328 09:12:34 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:58.328 Found net devices under 0000:09:00.1: cvl_0_1 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
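The device-discovery loop above ("Found net devices under 0000:09:00.x: cvl_0_x") maps each NIC PCI function to its kernel net device by globbing sysfs and stripping the directory prefix. A standalone sketch of that mapping (hypothetical helper name, parameterized on a sysfs root so it can run against a throwaway tree instead of real hardware):

```shell
# Sketch of the PCI -> net-device mapping: glob <root>/<pci>/net/* and keep
# the basenames (e.g. cvl_0_0). The harness does the equivalent with
# pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) and "${pci_net_devs[@]##*/}".
find_net_devs() {
    sysfs_root=$1
    pci=$2
    set -- "$sysfs_root/$pci/net/"*
    # If the glob matched nothing, "$1" is the literal pattern and does not exist.
    [ -e "$1" ] || return 1
    for dev in "$@"; do
        basename "$dev"
    done
}
```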
00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:58.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:58.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 00:21:58.328 00:21:58.328 --- 10.0.0.2 ping statistics --- 00:21:58.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.328 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:58.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:58.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:21:58.328 00:21:58.328 --- 10.0.0.1 ping statistics --- 00:21:58.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.328 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3389562 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3389562 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 3389562 ']' 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
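The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message comes from waitforlisten in autotest_common.sh, which blocks until nvmf_tgt is accepting RPCs. A simplified poll loop (hypothetical name; the real helper also checks that the target PID is still alive and probes with an actual RPC rather than just testing the path):

```shell
# Wait until a UNIX socket appears at $1, polling up to $2 times (default 100).
# -S tests that the path exists and is a socket; returns 1 on timeout.
wait_for_sock() {
    sock=$1
    retries=${2:-100}
    i=0
    while [ "$i" -lt "$retries" ]; do
        [ -S "$sock" ] && return 0
        i=$((i + 1))
        sleep 0.1
    done
    return 1
}
```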
00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:58.328 09:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.328 [2024-11-20 09:12:35.006144] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:21:58.328 [2024-11-20 09:12:35.006237] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.328 [2024-11-20 09:12:35.080628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:58.328 [2024-11-20 09:12:35.143619] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:58.328 [2024-11-20 09:12:35.143671] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:58.328 [2024-11-20 09:12:35.143699] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:58.328 [2024-11-20 09:12:35.143711] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:58.328 [2024-11-20 09:12:35.143721] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:58.328 [2024-11-20 09:12:35.145336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.328 [2024-11-20 09:12:35.145402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.328 [2024-11-20 09:12:35.145467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:58.328 [2024-11-20 09:12:35.145470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.328 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:58.328 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:21:58.328 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:58.328 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.328 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.328 [2024-11-20 09:12:35.277762] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:58.328 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.328 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:58.328 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:58.328 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.328 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:58.328 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.328 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.328 Malloc0 00:21:58.328 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.328 09:12:35 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:58.328 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.328 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.589 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.589 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:58.589 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.589 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.589 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.589 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:58.589 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.589 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.589 [2024-11-20 09:12:35.368579] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.589 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.589 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:58.589 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.589 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.589 09:12:35 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.589 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:58.589 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.589 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.589 [ 00:21:58.589 { 00:21:58.589 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:58.589 "subtype": "Discovery", 00:21:58.589 "listen_addresses": [ 00:21:58.589 { 00:21:58.589 "trtype": "TCP", 00:21:58.589 "adrfam": "IPv4", 00:21:58.589 "traddr": "10.0.0.2", 00:21:58.589 "trsvcid": "4420" 00:21:58.589 } 00:21:58.589 ], 00:21:58.589 "allow_any_host": true, 00:21:58.589 "hosts": [] 00:21:58.589 }, 00:21:58.589 { 00:21:58.589 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.589 "subtype": "NVMe", 00:21:58.589 "listen_addresses": [ 00:21:58.589 { 00:21:58.589 "trtype": "TCP", 00:21:58.589 "adrfam": "IPv4", 00:21:58.589 "traddr": "10.0.0.2", 00:21:58.589 "trsvcid": "4420" 00:21:58.589 } 00:21:58.589 ], 00:21:58.589 "allow_any_host": true, 00:21:58.589 "hosts": [], 00:21:58.589 "serial_number": "SPDK00000000000001", 00:21:58.589 "model_number": "SPDK bdev Controller", 00:21:58.589 "max_namespaces": 32, 00:21:58.589 "min_cntlid": 1, 00:21:58.589 "max_cntlid": 65519, 00:21:58.589 "namespaces": [ 00:21:58.589 { 00:21:58.589 "nsid": 1, 00:21:58.589 "bdev_name": "Malloc0", 00:21:58.589 "name": "Malloc0", 00:21:58.589 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:58.589 "eui64": "ABCDEF0123456789", 00:21:58.589 "uuid": "6e24fadb-3c4f-4288-851b-fca1f6b58ed4" 00:21:58.589 } 00:21:58.589 ] 00:21:58.589 } 00:21:58.589 ] 00:21:58.589 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.589 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:58.589 [2024-11-20 09:12:35.411119] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:21:58.589 [2024-11-20 09:12:35.411163] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3389592 ] 00:21:58.589 [2024-11-20 09:12:35.461634] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:21:58.589 [2024-11-20 09:12:35.461724] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:58.589 [2024-11-20 09:12:35.461736] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:58.589 [2024-11-20 09:12:35.461756] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:58.589 [2024-11-20 09:12:35.461772] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:58.589 [2024-11-20 09:12:35.465779] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:21:58.589 [2024-11-20 09:12:35.465843] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1982690 0 00:21:58.589 [2024-11-20 09:12:35.465971] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:58.589 [2024-11-20 09:12:35.465992] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:58.589 [2024-11-20 09:12:35.466001] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:58.589 [2024-11-20 09:12:35.466007] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:58.589 [2024-11-20 09:12:35.466063] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.589 [2024-11-20 09:12:35.466079] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.589 [2024-11-20 09:12:35.466087] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1982690) 00:21:58.589 [2024-11-20 09:12:35.466108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:58.589 [2024-11-20 09:12:35.466134] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4100, cid 0, qid 0 00:21:58.589 [2024-11-20 09:12:35.472319] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.589 [2024-11-20 09:12:35.472338] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.589 [2024-11-20 09:12:35.472345] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.589 [2024-11-20 09:12:35.472369] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4100) on tqpair=0x1982690 00:21:58.589 [2024-11-20 09:12:35.472386] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:58.589 [2024-11-20 09:12:35.472399] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:21:58.589 [2024-11-20 09:12:35.472410] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:21:58.589 [2024-11-20 09:12:35.472436] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.589 [2024-11-20 09:12:35.472446] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.589 [2024-11-20 09:12:35.472452] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1982690) 
00:21:58.589 [2024-11-20 09:12:35.472463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.589 [2024-11-20 09:12:35.472488] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4100, cid 0, qid 0 00:21:58.589 [2024-11-20 09:12:35.472579] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.589 [2024-11-20 09:12:35.472592] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.589 [2024-11-20 09:12:35.472599] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.589 [2024-11-20 09:12:35.472605] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4100) on tqpair=0x1982690 00:21:58.589 [2024-11-20 09:12:35.472614] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:21:58.589 [2024-11-20 09:12:35.472627] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:21:58.589 [2024-11-20 09:12:35.472640] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.589 [2024-11-20 09:12:35.472648] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.589 [2024-11-20 09:12:35.472654] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1982690) 00:21:58.589 [2024-11-20 09:12:35.472664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.589 [2024-11-20 09:12:35.472686] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4100, cid 0, qid 0 00:21:58.589 [2024-11-20 09:12:35.472769] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.589 [2024-11-20 09:12:35.472783] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:21:58.589 [2024-11-20 09:12:35.472790] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.589 [2024-11-20 09:12:35.472796] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4100) on tqpair=0x1982690 00:21:58.589 [2024-11-20 09:12:35.472807] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:21:58.589 [2024-11-20 09:12:35.472822] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:58.589 [2024-11-20 09:12:35.472839] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.590 [2024-11-20 09:12:35.472847] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.590 [2024-11-20 09:12:35.472854] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1982690) 00:21:58.590 [2024-11-20 09:12:35.472864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.590 [2024-11-20 09:12:35.472885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4100, cid 0, qid 0 00:21:58.590 [2024-11-20 09:12:35.472957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.590 [2024-11-20 09:12:35.472969] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.590 [2024-11-20 09:12:35.472976] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.590 [2024-11-20 09:12:35.472983] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4100) on tqpair=0x1982690 00:21:58.590 [2024-11-20 09:12:35.472992] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:58.590 [2024-11-20 09:12:35.473008] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.590 [2024-11-20 09:12:35.473017] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.590 [2024-11-20 09:12:35.473023] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1982690) 00:21:58.590 [2024-11-20 09:12:35.473033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.590 [2024-11-20 09:12:35.473054] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4100, cid 0, qid 0 00:21:58.590 [2024-11-20 09:12:35.473122] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.590 [2024-11-20 09:12:35.473134] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.590 [2024-11-20 09:12:35.473141] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.590 [2024-11-20 09:12:35.473148] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4100) on tqpair=0x1982690 00:21:58.590 [2024-11-20 09:12:35.473156] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:58.590 [2024-11-20 09:12:35.473165] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:58.590 [2024-11-20 09:12:35.473177] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:58.590 [2024-11-20 09:12:35.473289] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:21:58.590 [2024-11-20 09:12:35.473297] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:21:58.590 [2024-11-20 09:12:35.473324] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.590 [2024-11-20 09:12:35.473333] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.590 [2024-11-20 09:12:35.473339] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1982690) 00:21:58.590 [2024-11-20 09:12:35.473349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.590 [2024-11-20 09:12:35.473371] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4100, cid 0, qid 0 00:21:58.590 [2024-11-20 09:12:35.473467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.590 [2024-11-20 09:12:35.473482] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.590 [2024-11-20 09:12:35.473488] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.590 [2024-11-20 09:12:35.473495] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4100) on tqpair=0x1982690 00:21:58.590 [2024-11-20 09:12:35.473511] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:58.590 [2024-11-20 09:12:35.473528] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.590 [2024-11-20 09:12:35.473537] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.590 [2024-11-20 09:12:35.473544] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1982690) 00:21:58.590 [2024-11-20 09:12:35.473554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.590 [2024-11-20 09:12:35.473575] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4100, cid 0, qid 0 00:21:58.590 [2024-11-20 
09:12:35.473647] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.590 [2024-11-20 09:12:35.473659] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.590 [2024-11-20 09:12:35.473666] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.590 [2024-11-20 09:12:35.473672] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4100) on tqpair=0x1982690 00:21:58.590 [2024-11-20 09:12:35.473679] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:58.590 [2024-11-20 09:12:35.473688] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:58.590 [2024-11-20 09:12:35.473702] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:21:58.590 [2024-11-20 09:12:35.473717] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:58.590 [2024-11-20 09:12:35.473736] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.590 [2024-11-20 09:12:35.473744] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1982690) 00:21:58.590 [2024-11-20 09:12:35.473755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.590 [2024-11-20 09:12:35.473776] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4100, cid 0, qid 0 00:21:58.590 [2024-11-20 09:12:35.473890] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.590 [2024-11-20 09:12:35.473904] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:21:58.590 [2024-11-20 09:12:35.473911] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.590 [2024-11-20 09:12:35.473918] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1982690): datao=0, datal=4096, cccid=0 00:21:58.590 [2024-11-20 09:12:35.473926] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19e4100) on tqpair(0x1982690): expected_datao=0, payload_size=4096 00:21:58.590 [2024-11-20 09:12:35.473934] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.590 [2024-11-20 09:12:35.473952] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.590 [2024-11-20 09:12:35.473963] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.590 [2024-11-20 09:12:35.473976] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.590 [2024-11-20 09:12:35.473985] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.590 [2024-11-20 09:12:35.473992] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.590 [2024-11-20 09:12:35.473998] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4100) on tqpair=0x1982690 00:21:58.590 [2024-11-20 09:12:35.474012] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:21:58.590 [2024-11-20 09:12:35.474021] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:21:58.590 [2024-11-20 09:12:35.474029] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:21:58.590 [2024-11-20 09:12:35.474047] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:21:58.590 [2024-11-20 09:12:35.474058] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:21:58.590 [2024-11-20 09:12:35.474066] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:21:58.590 [2024-11-20 09:12:35.474085] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:58.590 [2024-11-20 09:12:35.474099] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.590 [2024-11-20 09:12:35.474107] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.590 [2024-11-20 09:12:35.474113] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1982690) 00:21:58.590 [2024-11-20 09:12:35.474124] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:58.590 [2024-11-20 09:12:35.474145] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4100, cid 0, qid 0 00:21:58.590 [2024-11-20 09:12:35.474247] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.590 [2024-11-20 09:12:35.474261] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.590 [2024-11-20 09:12:35.474267] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.590 [2024-11-20 09:12:35.474289] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4100) on tqpair=0x1982690 00:21:58.590 [2024-11-20 09:12:35.474312] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.590 [2024-11-20 09:12:35.474321] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.590 [2024-11-20 09:12:35.474328] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1982690) 00:21:58.590 [2024-11-20 09:12:35.474338] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:58.590 [2024-11-20 09:12:35.474348] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.590 [2024-11-20 09:12:35.474355] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.590 [2024-11-20 09:12:35.474361] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1982690) 00:21:58.591 [2024-11-20 09:12:35.474370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:58.591 [2024-11-20 09:12:35.474379] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.591 [2024-11-20 09:12:35.474386] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.591 [2024-11-20 09:12:35.474392] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1982690) 00:21:58.591 [2024-11-20 09:12:35.474401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:58.591 [2024-11-20 09:12:35.474411] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.591 [2024-11-20 09:12:35.474417] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.591 [2024-11-20 09:12:35.474424] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982690) 00:21:58.591 [2024-11-20 09:12:35.474432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:58.591 [2024-11-20 09:12:35.474441] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:58.591 [2024-11-20 09:12:35.474457] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:58.591 [2024-11-20 09:12:35.474468] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.591 [2024-11-20 09:12:35.474480] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1982690) 00:21:58.591 [2024-11-20 09:12:35.474490] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.591 [2024-11-20 09:12:35.474513] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4100, cid 0, qid 0 00:21:58.591 [2024-11-20 09:12:35.474525] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4280, cid 1, qid 0 00:21:58.591 [2024-11-20 09:12:35.474532] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4400, cid 2, qid 0 00:21:58.591 [2024-11-20 09:12:35.474540] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4580, cid 3, qid 0 00:21:58.591 [2024-11-20 09:12:35.474548] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4700, cid 4, qid 0 00:21:58.591 [2024-11-20 09:12:35.474685] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.591 [2024-11-20 09:12:35.474698] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.591 [2024-11-20 09:12:35.474704] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.591 [2024-11-20 09:12:35.474711] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4700) on tqpair=0x1982690 00:21:58.591 [2024-11-20 09:12:35.474727] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:21:58.591 [2024-11-20 09:12:35.474737] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:21:58.591 [2024-11-20 09:12:35.474755] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.591 [2024-11-20 09:12:35.474764] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1982690) 00:21:58.591 [2024-11-20 09:12:35.474775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.591 [2024-11-20 09:12:35.474796] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4700, cid 4, qid 0 00:21:58.591 [2024-11-20 09:12:35.474912] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.591 [2024-11-20 09:12:35.474924] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.591 [2024-11-20 09:12:35.474931] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.591 [2024-11-20 09:12:35.474938] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1982690): datao=0, datal=4096, cccid=4 00:21:58.591 [2024-11-20 09:12:35.474945] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19e4700) on tqpair(0x1982690): expected_datao=0, payload_size=4096 00:21:58.591 [2024-11-20 09:12:35.474952] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.591 [2024-11-20 09:12:35.474968] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.591 [2024-11-20 09:12:35.474977] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.591 [2024-11-20 09:12:35.516358] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.591 [2024-11-20 09:12:35.516378] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.591 [2024-11-20 09:12:35.516386] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.591 [2024-11-20 09:12:35.516393] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x19e4700) on tqpair=0x1982690 00:21:58.591 [2024-11-20 09:12:35.516415] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:21:58.591 [2024-11-20 09:12:35.516458] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.591 [2024-11-20 09:12:35.516469] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1982690) 00:21:58.591 [2024-11-20 09:12:35.516481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.591 [2024-11-20 09:12:35.516493] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.591 [2024-11-20 09:12:35.516505] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.591 [2024-11-20 09:12:35.516512] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1982690) 00:21:58.591 [2024-11-20 09:12:35.516521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:58.591 [2024-11-20 09:12:35.516552] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4700, cid 4, qid 0 00:21:58.591 [2024-11-20 09:12:35.516565] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4880, cid 5, qid 0 00:21:58.591 [2024-11-20 09:12:35.516725] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.591 [2024-11-20 09:12:35.516740] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.591 [2024-11-20 09:12:35.516747] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.591 [2024-11-20 09:12:35.516753] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1982690): datao=0, datal=1024, cccid=4 00:21:58.591 [2024-11-20 09:12:35.516761] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19e4700) on tqpair(0x1982690): expected_datao=0, payload_size=1024 00:21:58.591 [2024-11-20 09:12:35.516768] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.591 [2024-11-20 09:12:35.516778] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.591 [2024-11-20 09:12:35.516786] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.591 [2024-11-20 09:12:35.516794] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.591 [2024-11-20 09:12:35.516803] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.591 [2024-11-20 09:12:35.516810] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.591 [2024-11-20 09:12:35.516816] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4880) on tqpair=0x1982690 00:21:58.591 [2024-11-20 09:12:35.557372] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.591 [2024-11-20 09:12:35.557392] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.591 [2024-11-20 09:12:35.557399] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.591 [2024-11-20 09:12:35.557406] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4700) on tqpair=0x1982690 00:21:58.591 [2024-11-20 09:12:35.557425] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.591 [2024-11-20 09:12:35.557435] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1982690) 00:21:58.591 [2024-11-20 09:12:35.557446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.591 [2024-11-20 09:12:35.557475] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4700, cid 4, qid 0 00:21:58.591 [2024-11-20 09:12:35.557591] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.591 [2024-11-20 09:12:35.557603] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.591 [2024-11-20 09:12:35.557610] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.591 [2024-11-20 09:12:35.557617] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1982690): datao=0, datal=3072, cccid=4 00:21:58.591 [2024-11-20 09:12:35.557624] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19e4700) on tqpair(0x1982690): expected_datao=0, payload_size=3072 00:21:58.591 [2024-11-20 09:12:35.557631] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.591 [2024-11-20 09:12:35.557642] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.591 [2024-11-20 09:12:35.557650] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.591 [2024-11-20 09:12:35.557661] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.591 [2024-11-20 09:12:35.557671] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.591 [2024-11-20 09:12:35.557677] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.591 [2024-11-20 09:12:35.557684] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4700) on tqpair=0x1982690 00:21:58.591 [2024-11-20 09:12:35.557704] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.591 [2024-11-20 09:12:35.557713] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1982690) 00:21:58.591 [2024-11-20 09:12:35.557724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.591 [2024-11-20 09:12:35.557752] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4700, cid 4, qid 0 00:21:58.591 [2024-11-20 
09:12:35.557844] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.591 [2024-11-20 09:12:35.557856] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.591 [2024-11-20 09:12:35.557862] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.591 [2024-11-20 09:12:35.557869] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1982690): datao=0, datal=8, cccid=4 00:21:58.591 [2024-11-20 09:12:35.557876] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19e4700) on tqpair(0x1982690): expected_datao=0, payload_size=8 00:21:58.591 [2024-11-20 09:12:35.557883] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.591 [2024-11-20 09:12:35.557893] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.591 [2024-11-20 09:12:35.557900] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.591 [2024-11-20 09:12:35.603318] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.591 [2024-11-20 09:12:35.603338] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.591 [2024-11-20 09:12:35.603345] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.591 [2024-11-20 09:12:35.603367] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4700) on tqpair=0x1982690 00:21:58.591 ===================================================== 00:21:58.592 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:58.592 ===================================================== 00:21:58.592 Controller Capabilities/Features 00:21:58.592 ================================ 00:21:58.592 Vendor ID: 0000 00:21:58.592 Subsystem Vendor ID: 0000 00:21:58.592 Serial Number: .................... 00:21:58.592 Model Number: ........................................ 
00:21:58.592 Firmware Version: 25.01 00:21:58.592 Recommended Arb Burst: 0 00:21:58.592 IEEE OUI Identifier: 00 00 00 00:21:58.592 Multi-path I/O 00:21:58.592 May have multiple subsystem ports: No 00:21:58.592 May have multiple controllers: No 00:21:58.592 Associated with SR-IOV VF: No 00:21:58.592 Max Data Transfer Size: 131072 00:21:58.592 Max Number of Namespaces: 0 00:21:58.592 Max Number of I/O Queues: 1024 00:21:58.592 NVMe Specification Version (VS): 1.3 00:21:58.592 NVMe Specification Version (Identify): 1.3 00:21:58.592 Maximum Queue Entries: 128 00:21:58.592 Contiguous Queues Required: Yes 00:21:58.592 Arbitration Mechanisms Supported 00:21:58.592 Weighted Round Robin: Not Supported 00:21:58.592 Vendor Specific: Not Supported 00:21:58.592 Reset Timeout: 15000 ms 00:21:58.592 Doorbell Stride: 4 bytes 00:21:58.592 NVM Subsystem Reset: Not Supported 00:21:58.592 Command Sets Supported 00:21:58.592 NVM Command Set: Supported 00:21:58.592 Boot Partition: Not Supported 00:21:58.592 Memory Page Size Minimum: 4096 bytes 00:21:58.592 Memory Page Size Maximum: 4096 bytes 00:21:58.592 Persistent Memory Region: Not Supported 00:21:58.592 Optional Asynchronous Events Supported 00:21:58.592 Namespace Attribute Notices: Not Supported 00:21:58.592 Firmware Activation Notices: Not Supported 00:21:58.592 ANA Change Notices: Not Supported 00:21:58.592 PLE Aggregate Log Change Notices: Not Supported 00:21:58.592 LBA Status Info Alert Notices: Not Supported 00:21:58.592 EGE Aggregate Log Change Notices: Not Supported 00:21:58.592 Normal NVM Subsystem Shutdown event: Not Supported 00:21:58.592 Zone Descriptor Change Notices: Not Supported 00:21:58.592 Discovery Log Change Notices: Supported 00:21:58.592 Controller Attributes 00:21:58.592 128-bit Host Identifier: Not Supported 00:21:58.592 Non-Operational Permissive Mode: Not Supported 00:21:58.592 NVM Sets: Not Supported 00:21:58.592 Read Recovery Levels: Not Supported 00:21:58.592 Endurance Groups: Not Supported 00:21:58.592 
Predictable Latency Mode: Not Supported 00:21:58.592 Traffic Based Keep ALive: Not Supported 00:21:58.592 Namespace Granularity: Not Supported 00:21:58.592 SQ Associations: Not Supported 00:21:58.592 UUID List: Not Supported 00:21:58.592 Multi-Domain Subsystem: Not Supported 00:21:58.592 Fixed Capacity Management: Not Supported 00:21:58.592 Variable Capacity Management: Not Supported 00:21:58.592 Delete Endurance Group: Not Supported 00:21:58.592 Delete NVM Set: Not Supported 00:21:58.592 Extended LBA Formats Supported: Not Supported 00:21:58.592 Flexible Data Placement Supported: Not Supported 00:21:58.592 00:21:58.592 Controller Memory Buffer Support 00:21:58.592 ================================ 00:21:58.592 Supported: No 00:21:58.592 00:21:58.592 Persistent Memory Region Support 00:21:58.592 ================================ 00:21:58.592 Supported: No 00:21:58.592 00:21:58.592 Admin Command Set Attributes 00:21:58.592 ============================ 00:21:58.592 Security Send/Receive: Not Supported 00:21:58.592 Format NVM: Not Supported 00:21:58.592 Firmware Activate/Download: Not Supported 00:21:58.592 Namespace Management: Not Supported 00:21:58.592 Device Self-Test: Not Supported 00:21:58.592 Directives: Not Supported 00:21:58.592 NVMe-MI: Not Supported 00:21:58.592 Virtualization Management: Not Supported 00:21:58.592 Doorbell Buffer Config: Not Supported 00:21:58.592 Get LBA Status Capability: Not Supported 00:21:58.592 Command & Feature Lockdown Capability: Not Supported 00:21:58.592 Abort Command Limit: 1 00:21:58.592 Async Event Request Limit: 4 00:21:58.592 Number of Firmware Slots: N/A 00:21:58.592 Firmware Slot 1 Read-Only: N/A 00:21:58.592 Firmware Activation Without Reset: N/A 00:21:58.592 Multiple Update Detection Support: N/A 00:21:58.592 Firmware Update Granularity: No Information Provided 00:21:58.592 Per-Namespace SMART Log: No 00:21:58.592 Asymmetric Namespace Access Log Page: Not Supported 00:21:58.592 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:21:58.592 Command Effects Log Page: Not Supported 00:21:58.592 Get Log Page Extended Data: Supported 00:21:58.592 Telemetry Log Pages: Not Supported 00:21:58.592 Persistent Event Log Pages: Not Supported 00:21:58.592 Supported Log Pages Log Page: May Support 00:21:58.592 Commands Supported & Effects Log Page: Not Supported 00:21:58.592 Feature Identifiers & Effects Log Page:May Support 00:21:58.592 NVMe-MI Commands & Effects Log Page: May Support 00:21:58.592 Data Area 4 for Telemetry Log: Not Supported 00:21:58.592 Error Log Page Entries Supported: 128 00:21:58.592 Keep Alive: Not Supported 00:21:58.592 00:21:58.592 NVM Command Set Attributes 00:21:58.592 ========================== 00:21:58.592 Submission Queue Entry Size 00:21:58.592 Max: 1 00:21:58.592 Min: 1 00:21:58.592 Completion Queue Entry Size 00:21:58.592 Max: 1 00:21:58.592 Min: 1 00:21:58.592 Number of Namespaces: 0 00:21:58.592 Compare Command: Not Supported 00:21:58.592 Write Uncorrectable Command: Not Supported 00:21:58.592 Dataset Management Command: Not Supported 00:21:58.592 Write Zeroes Command: Not Supported 00:21:58.592 Set Features Save Field: Not Supported 00:21:58.592 Reservations: Not Supported 00:21:58.592 Timestamp: Not Supported 00:21:58.592 Copy: Not Supported 00:21:58.592 Volatile Write Cache: Not Present 00:21:58.592 Atomic Write Unit (Normal): 1 00:21:58.592 Atomic Write Unit (PFail): 1 00:21:58.592 Atomic Compare & Write Unit: 1 00:21:58.592 Fused Compare & Write: Supported 00:21:58.592 Scatter-Gather List 00:21:58.592 SGL Command Set: Supported 00:21:58.592 SGL Keyed: Supported 00:21:58.592 SGL Bit Bucket Descriptor: Not Supported 00:21:58.592 SGL Metadata Pointer: Not Supported 00:21:58.592 Oversized SGL: Not Supported 00:21:58.592 SGL Metadata Address: Not Supported 00:21:58.592 SGL Offset: Supported 00:21:58.592 Transport SGL Data Block: Not Supported 00:21:58.592 Replay Protected Memory Block: Not Supported 00:21:58.592 00:21:58.592 
Firmware Slot Information 00:21:58.592 ========================= 00:21:58.592 Active slot: 0 00:21:58.592 00:21:58.592 00:21:58.592 Error Log 00:21:58.592 ========= 00:21:58.592 00:21:58.592 Active Namespaces 00:21:58.592 ================= 00:21:58.592 Discovery Log Page 00:21:58.592 ================== 00:21:58.592 Generation Counter: 2 00:21:58.592 Number of Records: 2 00:21:58.592 Record Format: 0 00:21:58.592 00:21:58.592 Discovery Log Entry 0 00:21:58.592 ---------------------- 00:21:58.592 Transport Type: 3 (TCP) 00:21:58.592 Address Family: 1 (IPv4) 00:21:58.592 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:58.592 Entry Flags: 00:21:58.592 Duplicate Returned Information: 1 00:21:58.592 Explicit Persistent Connection Support for Discovery: 1 00:21:58.592 Transport Requirements: 00:21:58.592 Secure Channel: Not Required 00:21:58.592 Port ID: 0 (0x0000) 00:21:58.592 Controller ID: 65535 (0xffff) 00:21:58.592 Admin Max SQ Size: 128 00:21:58.592 Transport Service Identifier: 4420 00:21:58.593 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:58.593 Transport Address: 10.0.0.2 00:21:58.593 Discovery Log Entry 1 00:21:58.593 ---------------------- 00:21:58.593 Transport Type: 3 (TCP) 00:21:58.593 Address Family: 1 (IPv4) 00:21:58.593 Subsystem Type: 2 (NVM Subsystem) 00:21:58.593 Entry Flags: 00:21:58.593 Duplicate Returned Information: 0 00:21:58.593 Explicit Persistent Connection Support for Discovery: 0 00:21:58.593 Transport Requirements: 00:21:58.593 Secure Channel: Not Required 00:21:58.593 Port ID: 0 (0x0000) 00:21:58.593 Controller ID: 65535 (0xffff) 00:21:58.593 Admin Max SQ Size: 128 00:21:58.593 Transport Service Identifier: 4420 00:21:58.593 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:58.593 Transport Address: 10.0.0.2 [2024-11-20 09:12:35.603492] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:21:58.593 [2024-11-20 
09:12:35.603515] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4100) on tqpair=0x1982690 00:21:58.593 [2024-11-20 09:12:35.603529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.593 [2024-11-20 09:12:35.603539] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4280) on tqpair=0x1982690 00:21:58.593 [2024-11-20 09:12:35.603547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.593 [2024-11-20 09:12:35.603555] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4400) on tqpair=0x1982690 00:21:58.593 [2024-11-20 09:12:35.603562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.593 [2024-11-20 09:12:35.603570] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4580) on tqpair=0x1982690 00:21:58.593 [2024-11-20 09:12:35.603578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.593 [2024-11-20 09:12:35.603596] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.593 [2024-11-20 09:12:35.603620] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.593 [2024-11-20 09:12:35.603627] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982690) 00:21:58.593 [2024-11-20 09:12:35.603638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.593 [2024-11-20 09:12:35.603677] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4580, cid 3, qid 0 00:21:58.593 [2024-11-20 09:12:35.603763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.593 [2024-11-20 
09:12:35.603776] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.593 [2024-11-20 09:12:35.603782] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.593 [2024-11-20 09:12:35.603789] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4580) on tqpair=0x1982690 00:21:58.593 [2024-11-20 09:12:35.603816] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.593 [2024-11-20 09:12:35.603828] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.593 [2024-11-20 09:12:35.603835] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982690) 00:21:58.593 [2024-11-20 09:12:35.603846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.593 [2024-11-20 09:12:35.603873] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4580, cid 3, qid 0 00:21:58.593 [2024-11-20 09:12:35.603967] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.593 [2024-11-20 09:12:35.603981] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.593 [2024-11-20 09:12:35.603988] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.593 [2024-11-20 09:12:35.603995] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4580) on tqpair=0x1982690 00:21:58.593 [2024-11-20 09:12:35.604004] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:21:58.593 [2024-11-20 09:12:35.604011] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:21:58.593 [2024-11-20 09:12:35.604028] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.593 [2024-11-20 09:12:35.604036] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.593 
[2024-11-20 09:12:35.604043] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982690) 00:21:58.593 [2024-11-20 09:12:35.604053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.593 [2024-11-20 09:12:35.604074] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4580, cid 3, qid 0 00:21:58.593 [2024-11-20 09:12:35.604152] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.593 [2024-11-20 09:12:35.604165] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.593 [2024-11-20 09:12:35.604172] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.593 [2024-11-20 09:12:35.604179] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4580) on tqpair=0x1982690 00:21:58.593 [2024-11-20 09:12:35.604196] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.593 [2024-11-20 09:12:35.604205] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.593 [2024-11-20 09:12:35.604212] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982690) 00:21:58.593 [2024-11-20 09:12:35.604222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.593 [2024-11-20 09:12:35.604242] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4580, cid 3, qid 0 00:21:58.593 [2024-11-20 09:12:35.604315] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.593 [2024-11-20 09:12:35.604329] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.593 [2024-11-20 09:12:35.604336] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.593 [2024-11-20 09:12:35.604342] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4580) on 
tqpair=0x1982690 00:21:58.593 [2024-11-20 09:12:35.604358] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.593 [2024-11-20 09:12:35.604367] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.593 [2024-11-20 09:12:35.604373] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982690) 00:21:58.593 [2024-11-20 09:12:35.604384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.593 [2024-11-20 09:12:35.604405] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4580, cid 3, qid 0 00:21:58.593 [2024-11-20 09:12:35.604475] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.593 [2024-11-20 09:12:35.604487] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.593 [2024-11-20 09:12:35.604494] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.593 [2024-11-20 09:12:35.604505] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4580) on tqpair=0x1982690 00:21:58.593 [2024-11-20 09:12:35.604522] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.593 [2024-11-20 09:12:35.604531] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.593 [2024-11-20 09:12:35.604537] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982690) 00:21:58.593 [2024-11-20 09:12:35.604547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.593 [2024-11-20 09:12:35.604568] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4580, cid 3, qid 0 00:21:58.593 [2024-11-20 09:12:35.604640] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.593 [2024-11-20 09:12:35.604652] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:21:58.593 [2024-11-20 09:12:35.604659] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.593 [2024-11-20 09:12:35.604666] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4580) on tqpair=0x1982690 00:21:58.593 [2024-11-20 09:12:35.604681] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.593 [2024-11-20 09:12:35.604690] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.593 [2024-11-20 09:12:35.604697] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982690) 00:21:58.593 [2024-11-20 09:12:35.604707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.593 [2024-11-20 09:12:35.604727] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4580, cid 3, qid 0 00:21:58.593 [2024-11-20 09:12:35.604800] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.593 [2024-11-20 09:12:35.604812] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.593 [2024-11-20 09:12:35.604819] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.593 [2024-11-20 09:12:35.604825] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4580) on tqpair=0x1982690 00:21:58.593 [2024-11-20 09:12:35.604841] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.593 [2024-11-20 09:12:35.604850] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.593 [2024-11-20 09:12:35.604856] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982690) 00:21:58.593 [2024-11-20 09:12:35.604867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.593 [2024-11-20 09:12:35.604886] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x19e4580, cid 3, qid 0 00:21:58.593 [2024-11-20 09:12:35.604956] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.593 [2024-11-20 09:12:35.604968] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.593 [2024-11-20 09:12:35.604975] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.593 [2024-11-20 09:12:35.604981] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4580) on tqpair=0x1982690 00:21:58.593 [2024-11-20 09:12:35.604997] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.593 [2024-11-20 09:12:35.605006] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.593 [2024-11-20 09:12:35.605012] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982690) 00:21:58.593 [2024-11-20 09:12:35.605022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.593 [2024-11-20 09:12:35.605043] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4580, cid 3, qid 0 00:21:58.593 [2024-11-20 09:12:35.605118] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.593 [2024-11-20 09:12:35.605131] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.593 [2024-11-20 09:12:35.605137] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.594 [2024-11-20 09:12:35.605144] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4580) on tqpair=0x1982690 00:21:58.594 [2024-11-20 09:12:35.605164] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.594 [2024-11-20 09:12:35.605174] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.594 [2024-11-20 09:12:35.605180] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982690) 00:21:58.594 [2024-11-20 09:12:35.605191] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.594 [2024-11-20 09:12:35.605211] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4580, cid 3, qid 0 00:21:58.594 [2024-11-20 09:12:35.605281] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.594 [2024-11-20 09:12:35.605293] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.594 [2024-11-20 09:12:35.605299] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.594 [2024-11-20 09:12:35.605314] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4580) on tqpair=0x1982690 00:21:58.594 [2024-11-20 09:12:35.605331] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.594 [2024-11-20 09:12:35.605340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.594 [2024-11-20 09:12:35.605347] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982690) 00:21:58.594 [2024-11-20 09:12:35.605357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.594 [2024-11-20 09:12:35.605378] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4580, cid 3, qid 0 00:21:58.594 [2024-11-20 09:12:35.605448] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.594 [2024-11-20 09:12:35.605460] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.594 [2024-11-20 09:12:35.605467] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.594 [2024-11-20 09:12:35.605474] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4580) on tqpair=0x1982690 00:21:58.594 [2024-11-20 09:12:35.605489] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.594 [2024-11-20 09:12:35.605498] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.594 [2024-11-20 09:12:35.605505] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982690) 00:21:58.594 [2024-11-20 09:12:35.605515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.594 [2024-11-20 09:12:35.605535] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4580, cid 3, qid 0 00:21:58.594 [2024-11-20 09:12:35.605605] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.594 [2024-11-20 09:12:35.605618] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.594 [2024-11-20 09:12:35.605624] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.594 [2024-11-20 09:12:35.605631] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4580) on tqpair=0x1982690 00:21:58.594 [2024-11-20 09:12:35.605647] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.594 [2024-11-20 09:12:35.605656] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.594 [2024-11-20 09:12:35.605662] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982690) 00:21:58.594 [2024-11-20 09:12:35.605672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.594 [2024-11-20 09:12:35.605692] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4580, cid 3, qid 0 00:21:58.594 [2024-11-20 09:12:35.605764] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.594 [2024-11-20 09:12:35.605776] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.594 [2024-11-20 09:12:35.605783] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.594 [2024-11-20 09:12:35.605790] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4580) on tqpair=0x1982690
00:21:58.594 [2024-11-20 09:12:35.605805] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.594 [2024-11-20 09:12:35.605818] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.594 [2024-11-20 09:12:35.605825] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982690)
00:21:58.594 [2024-11-20 09:12:35.605835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.594 [2024-11-20 09:12:35.605856] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4580, cid 3, qid 0
00:21:58.594 [2024-11-20 09:12:35.605928] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.594 [2024-11-20 09:12:35.605942] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.594 [2024-11-20 09:12:35.605949] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.594 [2024-11-20 09:12:35.605955] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4580) on tqpair=0x1982690
00:21:58.594 [2024-11-20 09:12:35.605972] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.594 [2024-11-20 09:12:35.605980] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.594 [2024-11-20 09:12:35.605987] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982690)
00:21:58.594 [2024-11-20 09:12:35.605997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.594 [2024-11-20 09:12:35.606017] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4580, cid 3, qid 0
00:21:58.594 [2024-11-20 09:12:35.606086] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.594 [2024-11-20 09:12:35.606098] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.594 [2024-11-20 09:12:35.606105] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.594 [2024-11-20 09:12:35.606112] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4580) on tqpair=0x1982690
00:21:58.594 [2024-11-20 09:12:35.606127] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.594 [2024-11-20 09:12:35.606136] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.594 [2024-11-20 09:12:35.606143] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982690)
00:21:58.594 [2024-11-20 09:12:35.606153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.594 [2024-11-20 09:12:35.606173] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4580, cid 3, qid 0
00:21:58.594 [2024-11-20 09:12:35.606247] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.594 [2024-11-20 09:12:35.606261] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.594 [2024-11-20 09:12:35.606267] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.595 [2024-11-20 09:12:35.606274] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4580) on tqpair=0x1982690
00:21:58.595 [2024-11-20 09:12:35.606290] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.595 [2024-11-20 09:12:35.606299] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.595 [2024-11-20 09:12:35.606317] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982690)
00:21:58.595 [2024-11-20 09:12:35.606328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.595 [2024-11-20 09:12:35.606349] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4580, cid 3, qid 0
00:21:58.595 [2024-11-20 09:12:35.606423] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.595 [2024-11-20 09:12:35.606435] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.595 [2024-11-20 09:12:35.606442] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.595 [2024-11-20 09:12:35.606449] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4580) on tqpair=0x1982690
00:21:58.595 [2024-11-20 09:12:35.606465] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.595 [2024-11-20 09:12:35.606474] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.595 [2024-11-20 09:12:35.606484] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982690)
00:21:58.595 [2024-11-20 09:12:35.606495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.595 [2024-11-20 09:12:35.606516] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4580, cid 3, qid 0
00:21:58.595 [2024-11-20 09:12:35.606589] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.595 [2024-11-20 09:12:35.606601] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.595 [2024-11-20 09:12:35.606608] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.595 [2024-11-20 09:12:35.606614] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4580) on tqpair=0x1982690
00:21:58.595 [2024-11-20 09:12:35.606630] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.595 [2024-11-20 09:12:35.606639] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.595 [2024-11-20 09:12:35.606646] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982690)
00:21:58.595 [2024-11-20 09:12:35.606656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.595 [2024-11-20 09:12:35.606676] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4580, cid 3, qid 0
00:21:58.595 [2024-11-20 09:12:35.606745] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.595 [2024-11-20 09:12:35.606758] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.595 [2024-11-20 09:12:35.606764] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.595 [2024-11-20 09:12:35.606771] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4580) on tqpair=0x1982690
00:21:58.595 [2024-11-20 09:12:35.606787] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.595 [2024-11-20 09:12:35.606796] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.595 [2024-11-20 09:12:35.606802] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982690)
00:21:58.595 [2024-11-20 09:12:35.606812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.595 [2024-11-20 09:12:35.606832] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4580, cid 3, qid 0
00:21:58.595 [2024-11-20 09:12:35.606905] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.595 [2024-11-20 09:12:35.606917] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.595 [2024-11-20 09:12:35.606924] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.595 [2024-11-20 09:12:35.606930] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4580) on tqpair=0x1982690
00:21:58.595 [2024-11-20 09:12:35.606946] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.595 [2024-11-20 09:12:35.606955] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.595 [2024-11-20 09:12:35.606961] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982690)
00:21:58.595 [2024-11-20 09:12:35.606971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.595 [2024-11-20 09:12:35.606992] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4580, cid 3, qid 0
00:21:58.595 [2024-11-20 09:12:35.607070] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.595 [2024-11-20 09:12:35.607084] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.595 [2024-11-20 09:12:35.607091] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.595 [2024-11-20 09:12:35.607097] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4580) on tqpair=0x1982690
00:21:58.595 [2024-11-20 09:12:35.607113] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.595 [2024-11-20 09:12:35.607122] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.595 [2024-11-20 09:12:35.607129] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982690)
00:21:58.595 [2024-11-20 09:12:35.607143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.595 [2024-11-20 09:12:35.607165] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4580, cid 3, qid 0
00:21:58.595 [2024-11-20 09:12:35.607237] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.595 [2024-11-20 09:12:35.607249] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.595 [2024-11-20 09:12:35.607256] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.595 [2024-11-20 09:12:35.607263] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4580) on tqpair=0x1982690
00:21:58.595 [2024-11-20 09:12:35.607279] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.595 [2024-11-20 09:12:35.607287] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.595 [2024-11-20 09:12:35.607294] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1982690)
00:21:58.595 [2024-11-20 09:12:35.611327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.595 [2024-11-20 09:12:35.611358] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e4580, cid 3, qid 0
00:21:58.595 [2024-11-20 09:12:35.611436] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.595 [2024-11-20 09:12:35.611450] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.595 [2024-11-20 09:12:35.611457] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.595 [2024-11-20 09:12:35.611463] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e4580) on tqpair=0x1982690
00:21:58.595 [2024-11-20 09:12:35.611477] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds
00:21:58.859
00:21:58.859 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:21:58.859 [2024-11-20 09:12:35.646930] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization...
00:21:58.859 [2024-11-20 09:12:35.646978] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3389598 ]
00:21:58.859 [2024-11-20 09:12:35.695068] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:21:58.859 [2024-11-20 09:12:35.695118] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:21:58.859 [2024-11-20 09:12:35.695129] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:21:58.859 [2024-11-20 09:12:35.695145] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:21:58.859 [2024-11-20 09:12:35.695159] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:21:58.859 [2024-11-20 09:12:35.698577] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:21:58.859 [2024-11-20 09:12:35.698614] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc38690 0
00:21:58.859 [2024-11-20 09:12:35.706323] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:21:58.859 [2024-11-20 09:12:35.706344] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:21:58.859 [2024-11-20 09:12:35.706352] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:21:58.859 [2024-11-20 09:12:35.706358] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:21:58.859 [2024-11-20 09:12:35.706399] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.859 [2024-11-20 09:12:35.706412] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.859 [2024-11-20 09:12:35.706419] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc38690)
00:21:58.859 [2024-11-20 09:12:35.706433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:21:58.859 [2024-11-20 09:12:35.706460] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a100, cid 0, qid 0
00:21:58.859 [2024-11-20 09:12:35.713319] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.859 [2024-11-20 09:12:35.713339] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.859 [2024-11-20 09:12:35.713347] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.859 [2024-11-20 09:12:35.713356] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a100) on tqpair=0xc38690
00:21:58.859 [2024-11-20 09:12:35.713378] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:21:58.859 [2024-11-20 09:12:35.713393] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout)
00:21:58.859 [2024-11-20 09:12:35.713403] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout)
00:21:58.859 [2024-11-20 09:12:35.713424] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.859 [2024-11-20 09:12:35.713436] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.859 [2024-11-20 09:12:35.713443] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc38690)
00:21:58.859 [2024-11-20 09:12:35.713455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.859 [2024-11-20 09:12:35.713481] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a100, cid 0, qid 0
00:21:58.859 [2024-11-20 09:12:35.713666] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.859 [2024-11-20 09:12:35.713682] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.859 [2024-11-20 09:12:35.713689] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.859 [2024-11-20 09:12:35.713696] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a100) on tqpair=0xc38690
00:21:58.859 [2024-11-20 09:12:35.713707] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout)
00:21:58.859 [2024-11-20 09:12:35.713724] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout)
00:21:58.859 [2024-11-20 09:12:35.713737] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.859 [2024-11-20 09:12:35.713745] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.859 [2024-11-20 09:12:35.713753] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc38690)
00:21:58.859 [2024-11-20 09:12:35.713767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.859 [2024-11-20 09:12:35.713790] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a100, cid 0, qid 0
00:21:58.859 [2024-11-20 09:12:35.713904] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.859 [2024-11-20 09:12:35.713922] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.859 [2024-11-20 09:12:35.713929] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.859 [2024-11-20 09:12:35.713936] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a100) on tqpair=0xc38690
00:21:58.859 [2024-11-20 09:12:35.713945] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout)
00:21:58.859 [2024-11-20 09:12:35.713961] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms)
00:21:58.859 [2024-11-20 09:12:35.713977] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.859 [2024-11-20 09:12:35.713990] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.859 [2024-11-20 09:12:35.713997] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc38690)
00:21:58.859 [2024-11-20 09:12:35.714008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.859 [2024-11-20 09:12:35.714032] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a100, cid 0, qid 0
00:21:58.859 [2024-11-20 09:12:35.714112] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.859 [2024-11-20 09:12:35.714127] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.859 [2024-11-20 09:12:35.714134] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.859 [2024-11-20 09:12:35.714141] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a100) on tqpair=0xc38690
00:21:58.859 [2024-11-20 09:12:35.714151] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:21:58.859 [2024-11-20 09:12:35.714172] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.859 [2024-11-20 09:12:35.714182] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.859 [2024-11-20 09:12:35.714188] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc38690)
00:21:58.859 [2024-11-20 09:12:35.714199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.859 [2024-11-20 09:12:35.714225] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a100, cid 0, qid 0
00:21:58.859 [2024-11-20 09:12:35.714366] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.859 [2024-11-20 09:12:35.714382] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.859 [2024-11-20 09:12:35.714389] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.859 [2024-11-20 09:12:35.714397] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a100) on tqpair=0xc38690
00:21:58.859 [2024-11-20 09:12:35.714408] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0
00:21:58.859 [2024-11-20 09:12:35.714419] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms)
00:21:58.859 [2024-11-20 09:12:35.714434] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:21:58.859 [2024-11-20 09:12:35.714546] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1
00:21:58.859 [2024-11-20 09:12:35.714558] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:21:58.859 [2024-11-20 09:12:35.714571] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.859 [2024-11-20 09:12:35.714597] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.859 [2024-11-20 09:12:35.714604] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc38690)
00:21:58.859 [2024-11-20 09:12:35.714615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.859 [2024-11-20 09:12:35.714637] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a100, cid 0, qid 0
00:21:58.859 [2024-11-20 09:12:35.714782] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.859 [2024-11-20 09:12:35.714797] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.859 [2024-11-20 09:12:35.714805] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.859 [2024-11-20 09:12:35.714812] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a100) on tqpair=0xc38690
00:21:58.859 [2024-11-20 09:12:35.714822] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:21:58.859 [2024-11-20 09:12:35.714848] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.859 [2024-11-20 09:12:35.714858] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.859 [2024-11-20 09:12:35.714865] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc38690)
00:21:58.859 [2024-11-20 09:12:35.714879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.859 [2024-11-20 09:12:35.714903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a100, cid 0, qid 0
00:21:58.859 [2024-11-20 09:12:35.714983] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.859 [2024-11-20 09:12:35.714998] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.859 [2024-11-20 09:12:35.715006] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.859 [2024-11-20 09:12:35.715013] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a100) on tqpair=0xc38690
00:21:58.859 [2024-11-20 09:12:35.715021] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:21:58.860 [2024-11-20 09:12:35.715034] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms)
00:21:58.860 [2024-11-20 09:12:35.715049] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout)
00:21:58.860 [2024-11-20 09:12:35.715069] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms)
00:21:58.860 [2024-11-20 09:12:35.715088] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.860 [2024-11-20 09:12:35.715097] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc38690)
00:21:58.860 [2024-11-20 09:12:35.715108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.860 [2024-11-20 09:12:35.715131] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a100, cid 0, qid 0
00:21:58.860 [2024-11-20 09:12:35.715269] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:58.860 [2024-11-20 09:12:35.715285] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:58.860 [2024-11-20 09:12:35.715292] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:58.860 [2024-11-20 09:12:35.715299] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc38690): datao=0, datal=4096, cccid=0
00:21:58.860 [2024-11-20 09:12:35.715317] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc9a100) on tqpair(0xc38690): expected_datao=0, payload_size=4096
00:21:58.860 [2024-11-20 09:12:35.715326] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.860 [2024-11-20 09:12:35.715338] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:58.860 [2024-11-20 09:12:35.715346] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:58.860 [2024-11-20 09:12:35.715366] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.860 [2024-11-20 09:12:35.715379] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.860 [2024-11-20 09:12:35.715386] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.860 [2024-11-20 09:12:35.715393] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a100) on tqpair=0xc38690
00:21:58.860 [2024-11-20 09:12:35.715405] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295
00:21:58.860 [2024-11-20 09:12:35.715416] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072
00:21:58.860 [2024-11-20 09:12:35.715426] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001
00:21:58.860 [2024-11-20 09:12:35.715439] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16
00:21:58.860 [2024-11-20 09:12:35.715448] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1
00:21:58.860 [2024-11-20 09:12:35.715463] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms)
00:21:58.860 [2024-11-20 09:12:35.715484] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms)
00:21:58.860 [2024-11-20 09:12:35.715500] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.860 [2024-11-20 09:12:35.715508] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.860 [2024-11-20 09:12:35.715515] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc38690)
00:21:58.860 [2024-11-20 09:12:35.715527] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:21:58.860 [2024-11-20 09:12:35.715551] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a100, cid 0, qid 0
00:21:58.860 [2024-11-20 09:12:35.715682] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.860 [2024-11-20 09:12:35.715697] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.860 [2024-11-20 09:12:35.715704] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.860 [2024-11-20 09:12:35.715712] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a100) on tqpair=0xc38690
00:21:58.860 [2024-11-20 09:12:35.715724] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.860 [2024-11-20 09:12:35.715734] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.860 [2024-11-20 09:12:35.715741] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc38690)
00:21:58.860 [2024-11-20 09:12:35.715752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:58.860 [2024-11-20 09:12:35.715763] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.860 [2024-11-20 09:12:35.715771] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.860 [2024-11-20 09:12:35.715778] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xc38690)
00:21:58.860 [2024-11-20 09:12:35.715787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:58.860 [2024-11-20 09:12:35.715797] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.860 [2024-11-20 09:12:35.715805] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.860 [2024-11-20 09:12:35.715811] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc38690)
00:21:58.860 [2024-11-20 09:12:35.715820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:58.860 [2024-11-20 09:12:35.715831] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.860 [2024-11-20 09:12:35.715838] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.860 [2024-11-20 09:12:35.715845] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc38690)
00:21:58.860 [2024-11-20 09:12:35.715854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:58.860 [2024-11-20 09:12:35.715863] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms)
00:21:58.860 [2024-11-20 09:12:35.715895] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:21:58.860 [2024-11-20 09:12:35.715909] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.860 [2024-11-20 09:12:35.715917] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc38690)
00:21:58.860 [2024-11-20 09:12:35.715927] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.860 [2024-11-20 09:12:35.715950] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a100, cid 0, qid 0
00:21:58.860 [2024-11-20 09:12:35.715979] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a280, cid 1, qid 0
00:21:58.860 [2024-11-20 09:12:35.715989] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a400, cid 2, qid 0
00:21:58.860 [2024-11-20 09:12:35.715996] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a580, cid 3, qid 0
00:21:58.860 [2024-11-20 09:12:35.716004] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a700, cid 4, qid 0
00:21:58.860 [2024-11-20 09:12:35.716236] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.860 [2024-11-20 09:12:35.716252] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.860 [2024-11-20 09:12:35.716259] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.860 [2024-11-20 09:12:35.716266] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a700) on tqpair=0xc38690
00:21:58.860 [2024-11-20 09:12:35.716279] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us
00:21:58.860 [2024-11-20 09:12:35.716301] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms)
00:21:58.860 [2024-11-20 09:12:35.716328] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms)
00:21:58.860 [2024-11-20 09:12:35.716345] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms)
00:21:58.860 [2024-11-20 09:12:35.716357] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.860 [2024-11-20 09:12:35.716365] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.860 [2024-11-20 09:12:35.716372] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc38690)
00:21:58.860 [2024-11-20 09:12:35.716383] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:21:58.860 [2024-11-20 09:12:35.716406] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a700, cid 4, qid 0
00:21:58.860 [2024-11-20 09:12:35.716538] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.860 [2024-11-20 09:12:35.716554] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.860 [2024-11-20 09:12:35.716561] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.860 [2024-11-20 09:12:35.716568] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a700) on tqpair=0xc38690
00:21:58.860 [2024-11-20 09:12:35.716648] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms)
00:21:58.860 [2024-11-20 09:12:35.716673] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms)
00:21:58.860 [2024-11-20 09:12:35.716691] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.860 [2024-11-20 09:12:35.716699] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc38690)
00:21:58.860 [2024-11-20 09:12:35.716726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.860 [2024-11-20 09:12:35.716748] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a700, cid 4, qid 0
00:21:58.860 [2024-11-20 09:12:35.716894] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:58.860 [2024-11-20 09:12:35.716909] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:58.860 [2024-11-20 09:12:35.716916] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:58.860 [2024-11-20 09:12:35.716929] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc38690): datao=0, datal=4096, cccid=4
00:21:58.860 [2024-11-20 09:12:35.716943] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc9a700) on tqpair(0xc38690): expected_datao=0, payload_size=4096
00:21:58.860 [2024-11-20 09:12:35.716956] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.860 [2024-11-20 09:12:35.716976] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:58.860 [2024-11-20 09:12:35.716990] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:58.860 [2024-11-20 09:12:35.717007] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.860 [2024-11-20 09:12:35.717017] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.860 [2024-11-20 09:12:35.717024] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.860 [2024-11-20 09:12:35.717031] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a700) on tqpair=0xc38690
00:21:58.860 [2024-11-20 09:12:35.717049] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added
00:21:58.860 [2024-11-20 09:12:35.717068] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms)
00:21:58.861 [2024-11-20 09:12:35.717089] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms)
00:21:58.861 [2024-11-20 09:12:35.717106] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.861 [2024-11-20 09:12:35.717114] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc38690)
00:21:58.861 [2024-11-20 09:12:35.717126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.861 [2024-11-20 09:12:35.717149] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a700, cid 4, qid 0
00:21:58.861 [2024-11-20 09:12:35.717254] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:58.861 [2024-11-20 09:12:35.717270] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:58.861 [2024-11-20 09:12:35.717294] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:58.861 [2024-11-20 09:12:35.721319] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc38690): datao=0, datal=4096, cccid=4
00:21:58.861 [2024-11-20 09:12:35.721333] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc9a700) on tqpair(0xc38690): expected_datao=0, payload_size=4096
00:21:58.861 [2024-11-20 09:12:35.721341] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.861 [2024-11-20 09:12:35.721361] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:58.861 [2024-11-20 09:12:35.721371] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:58.861 [2024-11-20 09:12:35.721383] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.861 [2024-11-20 09:12:35.721393] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.861 [2024-11-20 09:12:35.721401] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.861 [2024-11-20 09:12:35.721408] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a700) on tqpair=0xc38690
00:21:58.861 [2024-11-20 09:12:35.721431] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:21:58.861 [2024-11-20 09:12:35.721454] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:21:58.861 [2024-11-20 09:12:35.721478] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.861 [2024-11-20 09:12:35.721488] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc38690)
00:21:58.861 [2024-11-20 09:12:35.721499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.861 [2024-11-20 09:12:35.721523] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a700, cid 4, qid 0
00:21:58.861 [2024-11-20 09:12:35.721664] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:58.861 [2024-11-20 09:12:35.721679] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:58.861 [2024-11-20 09:12:35.721687] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:58.861 [2024-11-20 09:12:35.721698] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc38690): datao=0, datal=4096, cccid=4
00:21:58.861 [2024-11-20 09:12:35.721707] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc9a700) on tqpair(0xc38690): expected_datao=0, payload_size=4096
00:21:58.861 [2024-11-20 09:12:35.721720] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.861 [2024-11-20 09:12:35.721745] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:58.861 [2024-11-20 09:12:35.721756] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:58.861 [2024-11-20 09:12:35.721774] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.861 [2024-11-20 09:12:35.721788] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.861 [2024-11-20 09:12:35.721795]
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.861 [2024-11-20 09:12:35.721802] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a700) on tqpair=0xc38690 00:21:58.861 [2024-11-20 09:12:35.721817] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:58.861 [2024-11-20 09:12:35.721834] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:21:58.861 [2024-11-20 09:12:35.721853] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:21:58.861 [2024-11-20 09:12:35.721866] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:58.861 [2024-11-20 09:12:35.721876] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:58.861 [2024-11-20 09:12:35.721885] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:21:58.861 [2024-11-20 09:12:35.721894] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:21:58.861 [2024-11-20 09:12:35.721903] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:21:58.861 [2024-11-20 09:12:35.721912] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:21:58.861 [2024-11-20 09:12:35.721931] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.861 [2024-11-20 09:12:35.721940] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc38690) 00:21:58.861 [2024-11-20 09:12:35.721952] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.861 [2024-11-20 09:12:35.721964] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.861 [2024-11-20 09:12:35.721971] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.861 [2024-11-20 09:12:35.721978] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc38690) 00:21:58.861 [2024-11-20 09:12:35.721988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:58.861 [2024-11-20 09:12:35.722032] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a700, cid 4, qid 0 00:21:58.861 [2024-11-20 09:12:35.722045] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a880, cid 5, qid 0 00:21:58.861 [2024-11-20 09:12:35.722218] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.861 [2024-11-20 09:12:35.722233] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.861 [2024-11-20 09:12:35.722240] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.861 [2024-11-20 09:12:35.722248] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a700) on tqpair=0xc38690 00:21:58.861 [2024-11-20 09:12:35.722258] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.861 [2024-11-20 09:12:35.722272] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.861 [2024-11-20 09:12:35.722280] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.861 [2024-11-20 09:12:35.722287] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a880) on tqpair=0xc38690 00:21:58.861 [2024-11-20 
09:12:35.722314] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.861 [2024-11-20 09:12:35.722328] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc38690) 00:21:58.861 [2024-11-20 09:12:35.722339] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.861 [2024-11-20 09:12:35.722362] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a880, cid 5, qid 0 00:21:58.861 [2024-11-20 09:12:35.722461] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.861 [2024-11-20 09:12:35.722476] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.861 [2024-11-20 09:12:35.722484] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.861 [2024-11-20 09:12:35.722491] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a880) on tqpair=0xc38690 00:21:58.861 [2024-11-20 09:12:35.722510] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.861 [2024-11-20 09:12:35.722521] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc38690) 00:21:58.861 [2024-11-20 09:12:35.722532] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.861 [2024-11-20 09:12:35.722556] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a880, cid 5, qid 0 00:21:58.861 [2024-11-20 09:12:35.722646] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.861 [2024-11-20 09:12:35.722661] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.861 [2024-11-20 09:12:35.722669] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.861 [2024-11-20 09:12:35.722676] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0xc9a880) on tqpair=0xc38690 00:21:58.861 [2024-11-20 09:12:35.722695] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.861 [2024-11-20 09:12:35.722706] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc38690) 00:21:58.861 [2024-11-20 09:12:35.722717] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.861 [2024-11-20 09:12:35.722739] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a880, cid 5, qid 0 00:21:58.861 [2024-11-20 09:12:35.722836] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.861 [2024-11-20 09:12:35.722852] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.861 [2024-11-20 09:12:35.722859] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.861 [2024-11-20 09:12:35.722867] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a880) on tqpair=0xc38690 00:21:58.861 [2024-11-20 09:12:35.722894] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.861 [2024-11-20 09:12:35.722906] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc38690) 00:21:58.861 [2024-11-20 09:12:35.722918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.861 [2024-11-20 09:12:35.722931] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.861 [2024-11-20 09:12:35.722939] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc38690) 00:21:58.861 [2024-11-20 09:12:35.722950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.861 
[2024-11-20 09:12:35.722962] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.861 [2024-11-20 09:12:35.722971] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xc38690) 00:21:58.861 [2024-11-20 09:12:35.722984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.861 [2024-11-20 09:12:35.722998] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.861 [2024-11-20 09:12:35.723006] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xc38690) 00:21:58.861 [2024-11-20 09:12:35.723031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.861 [2024-11-20 09:12:35.723054] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a880, cid 5, qid 0 00:21:58.861 [2024-11-20 09:12:35.723066] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a700, cid 4, qid 0 00:21:58.862 [2024-11-20 09:12:35.723074] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9aa00, cid 6, qid 0 00:21:58.862 [2024-11-20 09:12:35.723097] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9ab80, cid 7, qid 0 00:21:58.862 [2024-11-20 09:12:35.723348] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.862 [2024-11-20 09:12:35.723368] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.862 [2024-11-20 09:12:35.723377] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.862 [2024-11-20 09:12:35.723383] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc38690): datao=0, datal=8192, cccid=5 00:21:58.862 [2024-11-20 09:12:35.723393] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0xc9a880) on tqpair(0xc38690): expected_datao=0, payload_size=8192 00:21:58.862 [2024-11-20 09:12:35.723407] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.862 [2024-11-20 09:12:35.723420] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.862 [2024-11-20 09:12:35.723429] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.862 [2024-11-20 09:12:35.723438] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.862 [2024-11-20 09:12:35.723447] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.862 [2024-11-20 09:12:35.723454] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.862 [2024-11-20 09:12:35.723460] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc38690): datao=0, datal=512, cccid=4 00:21:58.862 [2024-11-20 09:12:35.723468] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc9a700) on tqpair(0xc38690): expected_datao=0, payload_size=512 00:21:58.862 [2024-11-20 09:12:35.723476] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.862 [2024-11-20 09:12:35.723486] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.862 [2024-11-20 09:12:35.723493] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.862 [2024-11-20 09:12:35.723502] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.862 [2024-11-20 09:12:35.723512] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.862 [2024-11-20 09:12:35.723519] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.862 [2024-11-20 09:12:35.723525] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc38690): datao=0, datal=512, cccid=6 00:21:58.862 [2024-11-20 09:12:35.723533] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc9aa00) on tqpair(0xc38690): expected_datao=0, 
payload_size=512 00:21:58.862 [2024-11-20 09:12:35.723541] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.862 [2024-11-20 09:12:35.723550] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.862 [2024-11-20 09:12:35.723558] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.862 [2024-11-20 09:12:35.723567] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.862 [2024-11-20 09:12:35.723576] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.862 [2024-11-20 09:12:35.723583] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.862 [2024-11-20 09:12:35.723589] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc38690): datao=0, datal=4096, cccid=7 00:21:58.862 [2024-11-20 09:12:35.723601] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc9ab80) on tqpair(0xc38690): expected_datao=0, payload_size=4096 00:21:58.862 [2024-11-20 09:12:35.723610] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.862 [2024-11-20 09:12:35.723649] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.862 [2024-11-20 09:12:35.723658] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.862 [2024-11-20 09:12:35.764444] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.862 [2024-11-20 09:12:35.764464] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.862 [2024-11-20 09:12:35.764472] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.862 [2024-11-20 09:12:35.764480] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a880) on tqpair=0xc38690 00:21:58.862 [2024-11-20 09:12:35.764503] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.862 [2024-11-20 09:12:35.764516] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.862 [2024-11-20 
09:12:35.764523] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.862 [2024-11-20 09:12:35.764530] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a700) on tqpair=0xc38690 00:21:58.862 [2024-11-20 09:12:35.764546] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.862 [2024-11-20 09:12:35.764557] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.862 [2024-11-20 09:12:35.764564] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.862 [2024-11-20 09:12:35.764571] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9aa00) on tqpair=0xc38690 00:21:58.862 [2024-11-20 09:12:35.764582] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.862 [2024-11-20 09:12:35.764592] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.862 [2024-11-20 09:12:35.764599] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.862 [2024-11-20 09:12:35.764606] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9ab80) on tqpair=0xc38690 00:21:58.862 ===================================================== 00:21:58.862 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:58.862 ===================================================== 00:21:58.862 Controller Capabilities/Features 00:21:58.862 ================================ 00:21:58.862 Vendor ID: 8086 00:21:58.862 Subsystem Vendor ID: 8086 00:21:58.862 Serial Number: SPDK00000000000001 00:21:58.862 Model Number: SPDK bdev Controller 00:21:58.862 Firmware Version: 25.01 00:21:58.862 Recommended Arb Burst: 6 00:21:58.862 IEEE OUI Identifier: e4 d2 5c 00:21:58.862 Multi-path I/O 00:21:58.862 May have multiple subsystem ports: Yes 00:21:58.862 May have multiple controllers: Yes 00:21:58.862 Associated with SR-IOV VF: No 00:21:58.862 Max Data Transfer Size: 131072 00:21:58.862 Max Number of Namespaces: 32 00:21:58.862 
Max Number of I/O Queues: 127 00:21:58.862 NVMe Specification Version (VS): 1.3 00:21:58.862 NVMe Specification Version (Identify): 1.3 00:21:58.862 Maximum Queue Entries: 128 00:21:58.862 Contiguous Queues Required: Yes 00:21:58.862 Arbitration Mechanisms Supported 00:21:58.862 Weighted Round Robin: Not Supported 00:21:58.862 Vendor Specific: Not Supported 00:21:58.862 Reset Timeout: 15000 ms 00:21:58.862 Doorbell Stride: 4 bytes 00:21:58.862 NVM Subsystem Reset: Not Supported 00:21:58.862 Command Sets Supported 00:21:58.862 NVM Command Set: Supported 00:21:58.862 Boot Partition: Not Supported 00:21:58.862 Memory Page Size Minimum: 4096 bytes 00:21:58.862 Memory Page Size Maximum: 4096 bytes 00:21:58.862 Persistent Memory Region: Not Supported 00:21:58.862 Optional Asynchronous Events Supported 00:21:58.862 Namespace Attribute Notices: Supported 00:21:58.862 Firmware Activation Notices: Not Supported 00:21:58.862 ANA Change Notices: Not Supported 00:21:58.862 PLE Aggregate Log Change Notices: Not Supported 00:21:58.862 LBA Status Info Alert Notices: Not Supported 00:21:58.862 EGE Aggregate Log Change Notices: Not Supported 00:21:58.862 Normal NVM Subsystem Shutdown event: Not Supported 00:21:58.862 Zone Descriptor Change Notices: Not Supported 00:21:58.862 Discovery Log Change Notices: Not Supported 00:21:58.862 Controller Attributes 00:21:58.862 128-bit Host Identifier: Supported 00:21:58.862 Non-Operational Permissive Mode: Not Supported 00:21:58.862 NVM Sets: Not Supported 00:21:58.862 Read Recovery Levels: Not Supported 00:21:58.862 Endurance Groups: Not Supported 00:21:58.862 Predictable Latency Mode: Not Supported 00:21:58.862 Traffic Based Keep ALive: Not Supported 00:21:58.862 Namespace Granularity: Not Supported 00:21:58.862 SQ Associations: Not Supported 00:21:58.862 UUID List: Not Supported 00:21:58.862 Multi-Domain Subsystem: Not Supported 00:21:58.862 Fixed Capacity Management: Not Supported 00:21:58.862 Variable Capacity Management: Not Supported 
00:21:58.862 Delete Endurance Group: Not Supported 00:21:58.862 Delete NVM Set: Not Supported 00:21:58.862 Extended LBA Formats Supported: Not Supported 00:21:58.862 Flexible Data Placement Supported: Not Supported 00:21:58.862 00:21:58.862 Controller Memory Buffer Support 00:21:58.862 ================================ 00:21:58.862 Supported: No 00:21:58.862 00:21:58.862 Persistent Memory Region Support 00:21:58.862 ================================ 00:21:58.862 Supported: No 00:21:58.862 00:21:58.862 Admin Command Set Attributes 00:21:58.862 ============================ 00:21:58.862 Security Send/Receive: Not Supported 00:21:58.862 Format NVM: Not Supported 00:21:58.862 Firmware Activate/Download: Not Supported 00:21:58.862 Namespace Management: Not Supported 00:21:58.862 Device Self-Test: Not Supported 00:21:58.862 Directives: Not Supported 00:21:58.862 NVMe-MI: Not Supported 00:21:58.862 Virtualization Management: Not Supported 00:21:58.862 Doorbell Buffer Config: Not Supported 00:21:58.862 Get LBA Status Capability: Not Supported 00:21:58.862 Command & Feature Lockdown Capability: Not Supported 00:21:58.862 Abort Command Limit: 4 00:21:58.862 Async Event Request Limit: 4 00:21:58.862 Number of Firmware Slots: N/A 00:21:58.862 Firmware Slot 1 Read-Only: N/A 00:21:58.862 Firmware Activation Without Reset: N/A 00:21:58.862 Multiple Update Detection Support: N/A 00:21:58.862 Firmware Update Granularity: No Information Provided 00:21:58.862 Per-Namespace SMART Log: No 00:21:58.862 Asymmetric Namespace Access Log Page: Not Supported 00:21:58.862 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:58.862 Command Effects Log Page: Supported 00:21:58.862 Get Log Page Extended Data: Supported 00:21:58.862 Telemetry Log Pages: Not Supported 00:21:58.862 Persistent Event Log Pages: Not Supported 00:21:58.862 Supported Log Pages Log Page: May Support 00:21:58.862 Commands Supported & Effects Log Page: Not Supported 00:21:58.863 Feature Identifiers & Effects Log Page:May Support 
00:21:58.863 NVMe-MI Commands & Effects Log Page: May Support 00:21:58.863 Data Area 4 for Telemetry Log: Not Supported 00:21:58.863 Error Log Page Entries Supported: 128 00:21:58.863 Keep Alive: Supported 00:21:58.863 Keep Alive Granularity: 10000 ms 00:21:58.863 00:21:58.863 NVM Command Set Attributes 00:21:58.863 ========================== 00:21:58.863 Submission Queue Entry Size 00:21:58.863 Max: 64 00:21:58.863 Min: 64 00:21:58.863 Completion Queue Entry Size 00:21:58.863 Max: 16 00:21:58.863 Min: 16 00:21:58.863 Number of Namespaces: 32 00:21:58.863 Compare Command: Supported 00:21:58.863 Write Uncorrectable Command: Not Supported 00:21:58.863 Dataset Management Command: Supported 00:21:58.863 Write Zeroes Command: Supported 00:21:58.863 Set Features Save Field: Not Supported 00:21:58.863 Reservations: Supported 00:21:58.863 Timestamp: Not Supported 00:21:58.863 Copy: Supported 00:21:58.863 Volatile Write Cache: Present 00:21:58.863 Atomic Write Unit (Normal): 1 00:21:58.863 Atomic Write Unit (PFail): 1 00:21:58.863 Atomic Compare & Write Unit: 1 00:21:58.863 Fused Compare & Write: Supported 00:21:58.863 Scatter-Gather List 00:21:58.863 SGL Command Set: Supported 00:21:58.863 SGL Keyed: Supported 00:21:58.863 SGL Bit Bucket Descriptor: Not Supported 00:21:58.863 SGL Metadata Pointer: Not Supported 00:21:58.863 Oversized SGL: Not Supported 00:21:58.863 SGL Metadata Address: Not Supported 00:21:58.863 SGL Offset: Supported 00:21:58.863 Transport SGL Data Block: Not Supported 00:21:58.863 Replay Protected Memory Block: Not Supported 00:21:58.863 00:21:58.863 Firmware Slot Information 00:21:58.863 ========================= 00:21:58.863 Active slot: 1 00:21:58.863 Slot 1 Firmware Revision: 25.01 00:21:58.863 00:21:58.863 00:21:58.863 Commands Supported and Effects 00:21:58.863 ============================== 00:21:58.863 Admin Commands 00:21:58.863 -------------- 00:21:58.863 Get Log Page (02h): Supported 00:21:58.863 Identify (06h): Supported 00:21:58.863 Abort 
(08h): Supported 00:21:58.863 Set Features (09h): Supported 00:21:58.863 Get Features (0Ah): Supported 00:21:58.863 Asynchronous Event Request (0Ch): Supported 00:21:58.863 Keep Alive (18h): Supported 00:21:58.863 I/O Commands 00:21:58.863 ------------ 00:21:58.863 Flush (00h): Supported LBA-Change 00:21:58.863 Write (01h): Supported LBA-Change 00:21:58.863 Read (02h): Supported 00:21:58.863 Compare (05h): Supported 00:21:58.863 Write Zeroes (08h): Supported LBA-Change 00:21:58.863 Dataset Management (09h): Supported LBA-Change 00:21:58.863 Copy (19h): Supported LBA-Change 00:21:58.863 00:21:58.863 Error Log 00:21:58.863 ========= 00:21:58.863 00:21:58.863 Arbitration 00:21:58.863 =========== 00:21:58.863 Arbitration Burst: 1 00:21:58.863 00:21:58.863 Power Management 00:21:58.863 ================ 00:21:58.863 Number of Power States: 1 00:21:58.863 Current Power State: Power State #0 00:21:58.863 Power State #0: 00:21:58.863 Max Power: 0.00 W 00:21:58.863 Non-Operational State: Operational 00:21:58.863 Entry Latency: Not Reported 00:21:58.863 Exit Latency: Not Reported 00:21:58.863 Relative Read Throughput: 0 00:21:58.863 Relative Read Latency: 0 00:21:58.863 Relative Write Throughput: 0 00:21:58.863 Relative Write Latency: 0 00:21:58.863 Idle Power: Not Reported 00:21:58.863 Active Power: Not Reported 00:21:58.863 Non-Operational Permissive Mode: Not Supported 00:21:58.863 00:21:58.863 Health Information 00:21:58.863 ================== 00:21:58.863 Critical Warnings: 00:21:58.863 Available Spare Space: OK 00:21:58.863 Temperature: OK 00:21:58.863 Device Reliability: OK 00:21:58.863 Read Only: No 00:21:58.863 Volatile Memory Backup: OK 00:21:58.863 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:58.863 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:58.863 Available Spare: 0% 00:21:58.863 Available Spare Threshold: 0% 00:21:58.863 Life Percentage Used:[2024-11-20 09:12:35.764741] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.863 
[2024-11-20 09:12:35.764754] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xc38690) 00:21:58.863 [2024-11-20 09:12:35.764766] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.863 [2024-11-20 09:12:35.764789] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9ab80, cid 7, qid 0 00:21:58.863 [2024-11-20 09:12:35.764904] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.863 [2024-11-20 09:12:35.764920] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.863 [2024-11-20 09:12:35.764927] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.863 [2024-11-20 09:12:35.764934] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9ab80) on tqpair=0xc38690 00:21:58.863 [2024-11-20 09:12:35.764982] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:21:58.863 [2024-11-20 09:12:35.765005] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a100) on tqpair=0xc38690 00:21:58.863 [2024-11-20 09:12:35.765019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.863 [2024-11-20 09:12:35.765028] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a280) on tqpair=0xc38690 00:21:58.863 [2024-11-20 09:12:35.765036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.863 [2024-11-20 09:12:35.765045] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a400) on tqpair=0xc38690 00:21:58.863 [2024-11-20 09:12:35.765052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.863 
[2024-11-20 09:12:35.765061] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a580) on tqpair=0xc38690 00:21:58.863 [2024-11-20 09:12:35.765073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.863 [2024-11-20 09:12:35.765087] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.863 [2024-11-20 09:12:35.765095] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.863 [2024-11-20 09:12:35.765117] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc38690) 00:21:58.863 [2024-11-20 09:12:35.765128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.863 [2024-11-20 09:12:35.765151] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a580, cid 3, qid 0 00:21:58.863 [2024-11-20 09:12:35.765283] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.863 [2024-11-20 09:12:35.765298] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.863 [2024-11-20 09:12:35.765316] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.863 [2024-11-20 09:12:35.765324] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a580) on tqpair=0xc38690 00:21:58.863 [2024-11-20 09:12:35.765336] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.863 [2024-11-20 09:12:35.765345] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.863 [2024-11-20 09:12:35.765351] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc38690) 00:21:58.863 [2024-11-20 09:12:35.765363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.863 [2024-11-20 09:12:35.765393] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a580, cid 3, qid 0 00:21:58.863 [2024-11-20 09:12:35.765488] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.863 [2024-11-20 09:12:35.765503] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.863 [2024-11-20 09:12:35.765510] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.863 [2024-11-20 09:12:35.765517] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a580) on tqpair=0xc38690 00:21:58.863 [2024-11-20 09:12:35.765526] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:21:58.863 [2024-11-20 09:12:35.765538] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:21:58.863 [2024-11-20 09:12:35.765555] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.863 [2024-11-20 09:12:35.765565] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.864 [2024-11-20 09:12:35.765571] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc38690) 00:21:58.864 [2024-11-20 09:12:35.765586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.864 [2024-11-20 09:12:35.765609] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a580, cid 3, qid 0 00:21:58.864 [2024-11-20 09:12:35.765687] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.864 [2024-11-20 09:12:35.765702] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.864 [2024-11-20 09:12:35.765709] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.864 [2024-11-20 09:12:35.765716] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a580) on tqpair=0xc38690 00:21:58.864 [2024-11-20 09:12:35.765734] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.864 [2024-11-20 09:12:35.765745] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.864 [2024-11-20 09:12:35.765752] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc38690) 00:21:58.864 [2024-11-20 09:12:35.765763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.864 [2024-11-20 09:12:35.765786] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a580, cid 3, qid 0 00:21:58.864 [2024-11-20 09:12:35.765866] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.864 [2024-11-20 09:12:35.765885] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.864 [2024-11-20 09:12:35.765893] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.864 [2024-11-20 09:12:35.765900] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a580) on tqpair=0xc38690 00:21:58.864 [2024-11-20 09:12:35.765919] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.864 [2024-11-20 09:12:35.765931] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.864 [2024-11-20 09:12:35.765937] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc38690) 00:21:58.864 [2024-11-20 09:12:35.765948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.864 [2024-11-20 09:12:35.765970] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a580, cid 3, qid 0 00:21:58.864 [2024-11-20 09:12:35.766051] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.864 [2024-11-20 09:12:35.766068] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.864 [2024-11-20 09:12:35.766076] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.864 [2024-11-20 09:12:35.766083] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a580) on tqpair=0xc38690 00:21:58.864 [2024-11-20 09:12:35.766100] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.864 [2024-11-20 09:12:35.766113] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.864 [2024-11-20 09:12:35.766120] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc38690) 00:21:58.864 [2024-11-20 09:12:35.766131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.864 [2024-11-20 09:12:35.766153] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a580, cid 3, qid 0 00:21:58.864 [2024-11-20 09:12:35.766251] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.864 [2024-11-20 09:12:35.766266] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.864 [2024-11-20 09:12:35.766273] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.864 [2024-11-20 09:12:35.766280] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a580) on tqpair=0xc38690 00:21:58.864 [2024-11-20 09:12:35.766300] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.864 [2024-11-20 09:12:35.766322] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.864 [2024-11-20 09:12:35.766329] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc38690) 00:21:58.864 [2024-11-20 09:12:35.766341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.864 [2024-11-20 09:12:35.766367] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a580, cid 3, qid 0 00:21:58.864 [2024-11-20 
09:12:35.766449] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.864 [2024-11-20 09:12:35.766464] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.864 [2024-11-20 09:12:35.766471] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.864 [2024-11-20 09:12:35.766478] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a580) on tqpair=0xc38690 00:21:58.864 [2024-11-20 09:12:35.766497] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.864 [2024-11-20 09:12:35.766508] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.864 [2024-11-20 09:12:35.766515] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc38690) 00:21:58.864 [2024-11-20 09:12:35.766526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.864 [2024-11-20 09:12:35.766549] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a580, cid 3, qid 0 00:21:58.864 [2024-11-20 09:12:35.766626] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.864 [2024-11-20 09:12:35.766641] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.864 [2024-11-20 09:12:35.766655] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.864 [2024-11-20 09:12:35.766663] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a580) on tqpair=0xc38690 00:21:58.864 [2024-11-20 09:12:35.766682] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.864 [2024-11-20 09:12:35.766693] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.864 [2024-11-20 09:12:35.766700] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc38690) 00:21:58.864 [2024-11-20 09:12:35.766711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.864 [2024-11-20 09:12:35.766733] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a580, cid 3, qid 0 00:21:58.864 [2024-11-20 09:12:35.766809] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.864 [2024-11-20 09:12:35.766824] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.864 [2024-11-20 09:12:35.766831] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.864 [2024-11-20 09:12:35.766838] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a580) on tqpair=0xc38690 00:21:58.864 [2024-11-20 09:12:35.766856] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.864 [2024-11-20 09:12:35.766867] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.864 [2024-11-20 09:12:35.766874] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc38690) 00:21:58.864 [2024-11-20 09:12:35.766885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.864 [2024-11-20 09:12:35.766908] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a580, cid 3, qid 0 00:21:58.864 [2024-11-20 09:12:35.766985] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.864 [2024-11-20 09:12:35.767000] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.864 [2024-11-20 09:12:35.767007] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.864 [2024-11-20 09:12:35.767014] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a580) on tqpair=0xc38690 00:21:58.864 [2024-11-20 09:12:35.767032] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.864 [2024-11-20 09:12:35.767044] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.864 
[2024-11-20 09:12:35.767050] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc38690) 00:21:58.864 [2024-11-20 09:12:35.767061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.864 [2024-11-20 09:12:35.767083] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a580, cid 3, qid 0 00:21:58.864 [2024-11-20 09:12:35.767177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.864 [2024-11-20 09:12:35.767192] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.864 [2024-11-20 09:12:35.767200] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.864 [2024-11-20 09:12:35.767207] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a580) on tqpair=0xc38690 00:21:58.864 [2024-11-20 09:12:35.767226] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.864 [2024-11-20 09:12:35.767237] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.864 [2024-11-20 09:12:35.767244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc38690) 00:21:58.864 [2024-11-20 09:12:35.767255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.864 [2024-11-20 09:12:35.767276] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a580, cid 3, qid 0 00:21:58.864 [2024-11-20 09:12:35.767375] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.864 [2024-11-20 09:12:35.767391] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.864 [2024-11-20 09:12:35.767398] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.864 [2024-11-20 09:12:35.767410] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a580) on tqpair=0xc38690 
00:21:58.864 [2024-11-20 09:12:35.767432] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.864 [2024-11-20 09:12:35.767442] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.864 [2024-11-20 09:12:35.767449] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc38690) 00:21:58.864 [2024-11-20 09:12:35.767461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.864 [2024-11-20 09:12:35.767486] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a580, cid 3, qid 0 00:21:58.864 [2024-11-20 09:12:35.767562] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.864 [2024-11-20 09:12:35.767577] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.864 [2024-11-20 09:12:35.767584] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.864 [2024-11-20 09:12:35.767591] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a580) on tqpair=0xc38690 00:21:58.864 [2024-11-20 09:12:35.767610] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.864 [2024-11-20 09:12:35.767621] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.864 [2024-11-20 09:12:35.767627] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc38690) 00:21:58.864 [2024-11-20 09:12:35.767638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.864 [2024-11-20 09:12:35.767660] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a580, cid 3, qid 0 00:21:58.864 [2024-11-20 09:12:35.767737] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.864 [2024-11-20 09:12:35.767752] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.864 
[2024-11-20 09:12:35.767759] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.864 [2024-11-20 09:12:35.767766] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a580) on tqpair=0xc38690 00:21:58.864 [2024-11-20 09:12:35.767785] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.864 [2024-11-20 09:12:35.767796] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.865 [2024-11-20 09:12:35.767803] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc38690) 00:21:58.865 [2024-11-20 09:12:35.767814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.865 [2024-11-20 09:12:35.767836] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a580, cid 3, qid 0 00:21:58.865 [2024-11-20 09:12:35.767915] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.865 [2024-11-20 09:12:35.767930] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.865 [2024-11-20 09:12:35.767937] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.865 [2024-11-20 09:12:35.767944] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a580) on tqpair=0xc38690 00:21:58.865 [2024-11-20 09:12:35.767962] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.865 [2024-11-20 09:12:35.767974] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.865 [2024-11-20 09:12:35.767980] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc38690) 00:21:58.865 [2024-11-20 09:12:35.767991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.865 [2024-11-20 09:12:35.768014] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a580, cid 3, qid 0 
00:21:58.865 [2024-11-20 09:12:35.768091] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.865 [2024-11-20 09:12:35.768106] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.865 [2024-11-20 09:12:35.768113] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.865 [2024-11-20 09:12:35.768120] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a580) on tqpair=0xc38690 00:21:58.865 [2024-11-20 09:12:35.768144] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.865 [2024-11-20 09:12:35.768155] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.865 [2024-11-20 09:12:35.768162] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc38690) 00:21:58.865 [2024-11-20 09:12:35.768173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.865 [2024-11-20 09:12:35.768197] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a580, cid 3, qid 0 00:21:58.865 [2024-11-20 09:12:35.771315] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.865 [2024-11-20 09:12:35.771333] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.865 [2024-11-20 09:12:35.771341] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.865 [2024-11-20 09:12:35.771348] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a580) on tqpair=0xc38690 00:21:58.865 [2024-11-20 09:12:35.771368] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.865 [2024-11-20 09:12:35.771379] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.865 [2024-11-20 09:12:35.771386] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc38690) 00:21:58.865 [2024-11-20 09:12:35.771397] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.865 [2024-11-20 09:12:35.771420] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc9a580, cid 3, qid 0 00:21:58.865 [2024-11-20 09:12:35.771568] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.865 [2024-11-20 09:12:35.771583] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.865 [2024-11-20 09:12:35.771591] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.865 [2024-11-20 09:12:35.771598] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc9a580) on tqpair=0xc38690 00:21:58.865 [2024-11-20 09:12:35.771614] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:21:58.865 0% 00:21:58.865 Data Units Read: 0 00:21:58.865 Data Units Written: 0 00:21:58.865 Host Read Commands: 0 00:21:58.865 Host Write Commands: 0 00:21:58.865 Controller Busy Time: 0 minutes 00:21:58.865 Power Cycles: 0 00:21:58.865 Power On Hours: 0 hours 00:21:58.865 Unsafe Shutdowns: 0 00:21:58.865 Unrecoverable Media Errors: 0 00:21:58.865 Lifetime Error Log Entries: 0 00:21:58.865 Warning Temperature Time: 0 minutes 00:21:58.865 Critical Temperature Time: 0 minutes 00:21:58.865 00:21:58.865 Number of Queues 00:21:58.865 ================ 00:21:58.865 Number of I/O Submission Queues: 127 00:21:58.865 Number of I/O Completion Queues: 127 00:21:58.865 00:21:58.865 Active Namespaces 00:21:58.865 ================= 00:21:58.865 Namespace ID:1 00:21:58.865 Error Recovery Timeout: Unlimited 00:21:58.865 Command Set Identifier: NVM (00h) 00:21:58.865 Deallocate: Supported 00:21:58.865 Deallocated/Unwritten Error: Not Supported 00:21:58.865 Deallocated Read Value: Unknown 00:21:58.865 Deallocate in Write Zeroes: Not Supported 00:21:58.865 Deallocated Guard Field: 0xFFFF 00:21:58.865 Flush: Supported 
00:21:58.865 Reservation: Supported 00:21:58.865 Namespace Sharing Capabilities: Multiple Controllers 00:21:58.865 Size (in LBAs): 131072 (0GiB) 00:21:58.865 Capacity (in LBAs): 131072 (0GiB) 00:21:58.865 Utilization (in LBAs): 131072 (0GiB) 00:21:58.865 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:58.865 EUI64: ABCDEF0123456789 00:21:58.865 UUID: 6e24fadb-3c4f-4288-851b-fca1f6b58ed4 00:21:58.865 Thin Provisioning: Not Supported 00:21:58.865 Per-NS Atomic Units: Yes 00:21:58.865 Atomic Boundary Size (Normal): 0 00:21:58.865 Atomic Boundary Size (PFail): 0 00:21:58.865 Atomic Boundary Offset: 0 00:21:58.865 Maximum Single Source Range Length: 65535 00:21:58.865 Maximum Copy Length: 65535 00:21:58.865 Maximum Source Range Count: 1 00:21:58.865 NGUID/EUI64 Never Reused: No 00:21:58.865 Namespace Write Protected: No 00:21:58.865 Number of LBA Formats: 1 00:21:58.865 Current LBA Format: LBA Format #00 00:21:58.865 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:58.865 00:21:58.865 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:58.865 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:58.865 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.865 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.865 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.865 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:58.865 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:58.865 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:58.865 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:21:58.865 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' 
tcp == tcp ']' 00:21:58.865 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:21:58.865 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:58.865 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:58.865 rmmod nvme_tcp 00:21:58.865 rmmod nvme_fabrics 00:21:58.865 rmmod nvme_keyring 00:21:58.865 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:58.865 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:21:58.865 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:21:58.865 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3389562 ']' 00:21:58.865 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3389562 00:21:58.865 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 3389562 ']' 00:21:58.865 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 3389562 00:21:58.865 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:21:58.865 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:58.865 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3389562 00:21:59.123 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:59.123 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:59.123 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3389562' 00:21:59.123 killing process with pid 3389562 00:21:59.123 09:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 3389562 00:21:59.123 09:12:35 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 3389562 00:21:59.123 09:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:59.123 09:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:59.123 09:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:59.123 09:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:21:59.123 09:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:21:59.123 09:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:59.123 09:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:21:59.123 09:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:59.123 09:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:59.123 09:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.123 09:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.123 09:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:01.654 00:22:01.654 real 0m5.669s 00:22:01.654 user 0m4.721s 00:22:01.654 sys 0m2.026s 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.654 ************************************ 00:22:01.654 END TEST nvmf_identify 00:22:01.654 ************************************ 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.654 ************************************ 00:22:01.654 START TEST nvmf_perf 00:22:01.654 ************************************ 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:01.654 * Looking for test storage... 00:22:01.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 
'op=<' 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:01.654 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 
-- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:01.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.655 --rc genhtml_branch_coverage=1 00:22:01.655 --rc genhtml_function_coverage=1 00:22:01.655 --rc genhtml_legend=1 00:22:01.655 --rc geninfo_all_blocks=1 00:22:01.655 --rc geninfo_unexecuted_blocks=1 00:22:01.655 00:22:01.655 ' 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:01.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.655 --rc genhtml_branch_coverage=1 00:22:01.655 --rc genhtml_function_coverage=1 00:22:01.655 --rc genhtml_legend=1 00:22:01.655 --rc geninfo_all_blocks=1 00:22:01.655 --rc geninfo_unexecuted_blocks=1 00:22:01.655 00:22:01.655 ' 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:01.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.655 --rc genhtml_branch_coverage=1 00:22:01.655 --rc genhtml_function_coverage=1 00:22:01.655 --rc genhtml_legend=1 00:22:01.655 --rc geninfo_all_blocks=1 00:22:01.655 --rc geninfo_unexecuted_blocks=1 00:22:01.655 00:22:01.655 ' 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:01.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.655 --rc genhtml_branch_coverage=1 00:22:01.655 --rc genhtml_function_coverage=1 00:22:01.655 --rc genhtml_legend=1 00:22:01.655 --rc geninfo_all_blocks=1 00:22:01.655 --rc geninfo_unexecuted_blocks=1 00:22:01.655 00:22:01.655 ' 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:01.655 09:12:38 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:01.655 09:12:38 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:01.655 09:12:38 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:01.655 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:22:01.655 09:12:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:03.558 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:03.558 
09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:03.558 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up 
]] 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:03.558 Found net devices under 0000:09:00.0: cvl_0_0 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:03.558 Found net devices under 0000:09:00.1: cvl_0_1 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:22:03.558 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:03.816 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:03.816 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:03.816 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:22:03.816 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:22:03.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:03.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms
00:22:03.816
00:22:03.816 --- 10.0.0.2 ping statistics ---
00:22:03.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:03.816 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms
00:22:03.816 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:03.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:03.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms
00:22:03.816
00:22:03.816 --- 10.0.0.1 ping statistics ---
00:22:03.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:03.816 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms
00:22:03.816 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:03.816 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0
00:22:03.816 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:22:03.816 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:03.816 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:22:03.816 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:22:03.816 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:03.816 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:22:03.816 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:22:03.816 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:22:03.816 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:03.816 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable
00:22:03.816 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:22:03.817 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3391652
00:22:03.817 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3391652
00:22:03.817 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 3391652 ']'
00:22:03.817 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local
rpc_addr=/var/tmp/spdk.sock 00:22:03.817 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:03.817 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:03.817 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.817 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:03.817 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:03.817 [2024-11-20 09:12:40.715519] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:22:03.817 [2024-11-20 09:12:40.715598] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.817 [2024-11-20 09:12:40.793404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:04.075 [2024-11-20 09:12:40.857172] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:04.075 [2024-11-20 09:12:40.857219] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:04.075 [2024-11-20 09:12:40.857233] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:04.075 [2024-11-20 09:12:40.857244] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:04.075 [2024-11-20 09:12:40.857254] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:04.075 [2024-11-20 09:12:40.858866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:04.075 [2024-11-20 09:12:40.859020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:04.075 [2024-11-20 09:12:40.859075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:04.075 [2024-11-20 09:12:40.859078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.075 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:04.075 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:22:04.075 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:04.075 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:04.075 09:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:04.075 09:12:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:04.075 09:12:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:04.075 09:12:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:07.351 09:12:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:07.351 09:12:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:07.609 09:12:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:0b:00.0 00:22:07.609 09:12:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:07.865 09:12:44 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:07.865 09:12:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:0b:00.0 ']' 00:22:07.865 09:12:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:07.865 09:12:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:07.865 09:12:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:08.146 [2024-11-20 09:12:44.983904] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:08.146 09:12:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:08.403 09:12:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:08.403 09:12:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:08.660 09:12:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:08.660 09:12:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:08.917 09:12:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:09.174 [2024-11-20 09:12:46.063782] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:09.174 09:12:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420
00:22:09.431 09:12:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:0b:00.0 ']'
00:22:09.431 09:12:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0'
00:22:09.431 09:12:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:22:09.431 09:12:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0'
00:22:10.800 Initializing NVMe Controllers
00:22:10.800 Attached to NVMe Controller at 0000:0b:00.0 [8086:0a54]
00:22:10.800 Associating PCIE (0000:0b:00.0) NSID 1 with lcore 0
00:22:10.800 Initialization complete. Launching workers.
00:22:10.800 ========================================================
00:22:10.800 Latency(us)
00:22:10.800 Device Information : IOPS MiB/s Average min max
00:22:10.800 PCIE (0000:0b:00.0) NSID 1 from core 0: 85145.16 332.60 375.19 35.50 5174.17
00:22:10.800 ========================================================
00:22:10.800 Total : 85145.16 332.60 375.19 35.50 5174.17
00:22:10.800
00:22:10.800 09:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:12.219 Initializing NVMe Controllers
00:22:12.219 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:12.219 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:12.219 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:12.219 Initialization complete. Launching workers.
00:22:12.219 ========================================================
00:22:12.219 Latency(us)
00:22:12.219 Device Information : IOPS MiB/s Average min max
00:22:12.219 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 146.48 0.57 7051.07 137.40 44959.81
00:22:12.219 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 55.80 0.22 17919.60 7952.76 47902.74
00:22:12.219 ========================================================
00:22:12.219 Total : 202.29 0.79 10049.29 137.40 47902.74
00:22:12.219
00:22:12.219 09:12:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:13.612 Initializing NVMe Controllers
00:22:13.612 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:13.612 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:13.612 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:13.612 Initialization complete. Launching workers.
00:22:13.612 ========================================================
00:22:13.612 Latency(us)
00:22:13.612 Device Information : IOPS MiB/s Average min max
00:22:13.612 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8447.17 33.00 3788.41 684.31 10710.35
00:22:13.612 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3858.88 15.07 8363.65 5169.72 47801.48
00:22:13.612 ========================================================
00:22:13.612 Total : 12306.04 48.07 5223.09 684.31 47801.48
00:22:13.612
00:22:13.612 09:12:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:22:13.612 09:12:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:22:13.612 09:12:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:16.134 Initializing NVMe Controllers
00:22:16.134 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:16.134 Controller IO queue size 128, less than required.
00:22:16.134 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:16.134 Controller IO queue size 128, less than required.
00:22:16.134 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:16.134 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:16.134 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:16.134 Initialization complete. Launching workers.
00:22:16.134 ========================================================
00:22:16.134 Latency(us)
00:22:16.134 Device Information : IOPS MiB/s Average min max
00:22:16.134 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1711.50 427.87 76751.25 50631.62 117976.59
00:22:16.134 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 585.50 146.37 221626.54 124694.86 347486.53
00:22:16.134 ========================================================
00:22:16.134 Total : 2297.00 574.25 113679.63 50631.62 347486.53
00:22:16.134
00:22:16.134 09:12:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:22:16.391 No valid NVMe controllers or AIO or URING devices found
00:22:16.391 Initializing NVMe Controllers
00:22:16.391 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:16.391 Controller IO queue size 128, less than required.
00:22:16.392 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:16.392 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:22:16.392 Controller IO queue size 128, less than required.
00:22:16.392 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:16.392 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:22:16.392 WARNING: Some requested NVMe devices were skipped
00:22:16.392 09:12:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:22:18.917 Initializing NVMe Controllers
00:22:18.917 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:18.917 Controller IO queue size 128, less than required.
00:22:18.917 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:18.917 Controller IO queue size 128, less than required.
00:22:18.917 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:18.917 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:18.917 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:18.917 Initialization complete. Launching workers.
00:22:18.917
00:22:18.917 ====================
00:22:18.917 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:22:18.917 TCP transport:
00:22:18.917 polls: 9908
00:22:18.917 idle_polls: 6836
00:22:18.917 sock_completions: 3072
00:22:18.917 nvme_completions: 5465
00:22:18.917 submitted_requests: 8112
00:22:18.917 queued_requests: 1
00:22:18.917
00:22:18.917 ====================
00:22:18.917 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:22:18.917 TCP transport:
00:22:18.917 polls: 10045
00:22:18.917 idle_polls: 5903
00:22:18.917 sock_completions: 4142
00:22:18.917 nvme_completions: 6521
00:22:18.917 submitted_requests: 9682
00:22:18.917 queued_requests: 1
00:22:18.917 ========================================================
00:22:18.917 Latency(us)
00:22:18.917 Device Information : IOPS MiB/s Average min max
00:22:18.917 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1365.73 341.43 96680.91 47971.50 161613.30
00:22:18.917 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1629.67 407.42 78877.38 43029.69 104847.10
00:22:18.917 ========================================================
00:22:18.917 Total : 2995.40 748.85 86994.74 43029.69 161613.30
00:22:18.917
00:22:18.917 09:12:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:22:19.175 09:12:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:19.175 09:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:22:19.175 09:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:22:19.175 09:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:22:19.175 09:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:19.175 09:12:56 nvmf_tcp.nvmf_host.nvmf_perf --
nvmf/common.sh@121 -- # sync 00:22:19.175 09:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:19.175 09:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:22:19.175 09:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:19.175 09:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:19.175 rmmod nvme_tcp 00:22:19.175 rmmod nvme_fabrics 00:22:19.175 rmmod nvme_keyring 00:22:19.175 09:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:19.434 09:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:22:19.434 09:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:22:19.434 09:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3391652 ']' 00:22:19.434 09:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3391652 00:22:19.434 09:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 3391652 ']' 00:22:19.434 09:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 3391652 00:22:19.434 09:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:22:19.434 09:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:19.434 09:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3391652 00:22:19.434 09:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:19.434 09:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:19.434 09:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3391652' 00:22:19.434 killing process with pid 3391652 00:22:19.434 09:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 
-- # kill 3391652 00:22:19.434 09:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 3391652 00:22:20.806 09:12:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:20.806 09:12:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:20.806 09:12:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:20.806 09:12:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:22:20.806 09:12:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:22:20.806 09:12:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:20.806 09:12:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:22:20.806 09:12:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:20.806 09:12:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:20.806 09:12:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.806 09:12:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:20.806 09:12:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.337 09:12:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:23.337 00:22:23.337 real 0m21.607s 00:22:23.337 user 1m6.452s 00:22:23.337 sys 0m5.643s 00:22:23.337 09:12:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:23.337 09:12:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:23.337 ************************************ 00:22:23.337 END TEST nvmf_perf 00:22:23.337 ************************************ 00:22:23.337 09:12:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:23.337 09:12:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:23.337 09:12:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:23.337 09:12:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.337 ************************************ 00:22:23.337 START TEST nvmf_fio_host 00:22:23.337 ************************************ 00:22:23.337 09:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:23.337 * Looking for test storage... 00:22:23.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:23.337 09:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:23.337 09:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:22:23.337 09:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:23.337 09:13:00 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:23.337 09:13:00 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:23.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.337 --rc genhtml_branch_coverage=1 00:22:23.337 --rc genhtml_function_coverage=1 00:22:23.337 --rc genhtml_legend=1 00:22:23.337 --rc geninfo_all_blocks=1 00:22:23.337 --rc geninfo_unexecuted_blocks=1 00:22:23.337 00:22:23.337 ' 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:23.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.337 --rc genhtml_branch_coverage=1 00:22:23.337 --rc genhtml_function_coverage=1 00:22:23.337 --rc genhtml_legend=1 00:22:23.337 --rc geninfo_all_blocks=1 00:22:23.337 --rc geninfo_unexecuted_blocks=1 00:22:23.337 00:22:23.337 ' 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:23.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.337 --rc genhtml_branch_coverage=1 00:22:23.337 --rc genhtml_function_coverage=1 00:22:23.337 --rc genhtml_legend=1 00:22:23.337 --rc geninfo_all_blocks=1 00:22:23.337 --rc geninfo_unexecuted_blocks=1 00:22:23.337 00:22:23.337 ' 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:23.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.337 --rc genhtml_branch_coverage=1 00:22:23.337 --rc genhtml_function_coverage=1 00:22:23.337 --rc genhtml_legend=1 00:22:23.337 --rc geninfo_all_blocks=1 00:22:23.337 --rc geninfo_unexecuted_blocks=1 00:22:23.337 00:22:23.337 ' 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.337 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:23.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:23.338 09:13:00 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:23.338 09:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.238 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:25.238 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:22:25.238 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:25.238 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:09:00.0 (0x8086 - 0x159b)' 00:22:25.239 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:25.239 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.239 09:13:02 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:25.239 Found net devices under 0000:09:00.0: cvl_0_0 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:25.239 Found net devices under 0000:09:00.1: cvl_0_1 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:25.239 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:25.239 09:13:02 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:25.497 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:25.497 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:25.497 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:25.497 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:25.498 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:25.498 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:25.498 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:25.498 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:25.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:25.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:22:25.498 00:22:25.498 --- 10.0.0.2 ping statistics --- 00:22:25.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.498 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:22:25.498 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:25.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:25.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:22:25.498 00:22:25.498 --- 10.0.0.1 ping statistics --- 00:22:25.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.498 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:22:25.498 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:25.498 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:22:25.498 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:25.498 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:25.498 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:25.498 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:25.498 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:25.498 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:25.498 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:25.498 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:25.498 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:25.498 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:25.498 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.498 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3395629 00:22:25.498 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:25.498 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:25.498 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3395629 00:22:25.498 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 3395629 ']' 00:22:25.498 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.498 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:25.498 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:25.498 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:25.498 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.498 [2024-11-20 09:13:02.438423] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:22:25.498 [2024-11-20 09:13:02.438492] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:25.498 [2024-11-20 09:13:02.507697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:25.756 [2024-11-20 09:13:02.566248] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:25.756 [2024-11-20 09:13:02.566298] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:25.756 [2024-11-20 09:13:02.566345] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:25.756 [2024-11-20 09:13:02.566357] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:25.756 [2024-11-20 09:13:02.566379] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:25.756 [2024-11-20 09:13:02.567972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.756 [2024-11-20 09:13:02.568045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:25.756 [2024-11-20 09:13:02.568143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:25.756 [2024-11-20 09:13:02.568151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.756 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:25.756 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:22:25.756 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:26.013 [2024-11-20 09:13:02.924020] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:26.013 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:26.013 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:26.013 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.013 09:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:26.270 Malloc1 00:22:26.270 09:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:26.834 09:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:26.834 09:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:27.092 [2024-11-20 09:13:04.084274] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:27.092 09:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:27.656 09:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:27.656 09:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:27.656 09:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:27.656 09:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:22:27.656 09:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:27.656 09:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:22:27.656 09:13:04 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:27.656 09:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:22:27.656 09:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:22:27.656 09:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:22:27.656 09:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:27.656 09:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:22:27.656 09:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:22:27.656 09:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:22:27.656 09:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:22:27.656 09:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:22:27.656 09:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:27.656 09:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:22:27.656 09:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:22:27.656 09:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:22:27.656 09:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:22:27.656 09:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:27.656 09:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:27.656 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:27.656 fio-3.35 00:22:27.656 Starting 1 thread 00:22:30.180 00:22:30.180 test: (groupid=0, jobs=1): err= 0: pid=3395990: Wed Nov 20 09:13:06 2024 00:22:30.180 read: IOPS=8253, BW=32.2MiB/s (33.8MB/s)(64.7MiB/2006msec) 00:22:30.180 slat (nsec): min=1852, max=161195, avg=2525.96, stdev=1998.53 00:22:30.180 clat (usec): min=2646, max=13601, avg=8422.43, stdev=709.88 00:22:30.180 lat (usec): min=2675, max=13604, avg=8424.96, stdev=709.76 00:22:30.180 clat percentiles (usec): 00:22:30.180 | 1.00th=[ 6849], 5.00th=[ 7308], 10.00th=[ 7570], 20.00th=[ 7832], 00:22:30.180 | 30.00th=[ 8094], 40.00th=[ 8225], 50.00th=[ 8455], 60.00th=[ 8586], 00:22:30.180 | 70.00th=[ 8848], 80.00th=[ 8979], 90.00th=[ 9241], 95.00th=[ 9503], 00:22:30.180 | 99.00th=[10028], 99.50th=[10159], 99.90th=[11600], 99.95th=[11994], 00:22:30.180 | 99.99th=[13566] 00:22:30.180 bw ( KiB/s): min=32320, max=33608, per=99.91%, avg=32984.00, stdev=575.15, samples=4 00:22:30.180 iops : min= 8080, max= 8402, avg=8246.00, stdev=143.79, samples=4 00:22:30.180 write: IOPS=8256, BW=32.2MiB/s (33.8MB/s)(64.7MiB/2006msec); 0 zone resets 00:22:30.180 slat (nsec): min=1990, max=135444, avg=2611.68, stdev=1355.97 00:22:30.180 clat (usec): min=1444, max=13528, avg=7019.33, stdev=591.93 00:22:30.180 lat (usec): min=1453, max=13530, avg=7021.94, stdev=591.88 00:22:30.180 clat percentiles (usec): 00:22:30.180 | 1.00th=[ 5735], 5.00th=[ 6128], 10.00th=[ 6325], 20.00th=[ 6587], 00:22:30.180 | 30.00th=[ 6718], 40.00th=[ 6915], 50.00th=[ 7046], 60.00th=[ 7177], 
00:22:30.180 | 70.00th=[ 7308], 80.00th=[ 7439], 90.00th=[ 7701], 95.00th=[ 7898], 00:22:30.180 | 99.00th=[ 8160], 99.50th=[ 8356], 99.90th=[11469], 99.95th=[11994], 00:22:30.180 | 99.99th=[13435] 00:22:30.180 bw ( KiB/s): min=32936, max=33112, per=99.92%, avg=32998.00, stdev=78.49, samples=4 00:22:30.180 iops : min= 8234, max= 8278, avg=8249.50, stdev=19.62, samples=4 00:22:30.180 lat (msec) : 2=0.02%, 4=0.13%, 10=99.23%, 20=0.62% 00:22:30.180 cpu : usr=65.79%, sys=32.67%, ctx=89, majf=0, minf=32 00:22:30.180 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:30.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:30.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:30.180 issued rwts: total=16556,16562,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:30.180 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:30.180 00:22:30.180 Run status group 0 (all jobs): 00:22:30.180 READ: bw=32.2MiB/s (33.8MB/s), 32.2MiB/s-32.2MiB/s (33.8MB/s-33.8MB/s), io=64.7MiB (67.8MB), run=2006-2006msec 00:22:30.180 WRITE: bw=32.2MiB/s (33.8MB/s), 32.2MiB/s-32.2MiB/s (33.8MB/s-33.8MB/s), io=64.7MiB (67.8MB), run=2006-2006msec 00:22:30.180 09:13:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:30.180 09:13:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:30.180 09:13:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:22:30.180 09:13:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- 
# sanitizers=('libasan' 'libclang_rt.asan') 00:22:30.180 09:13:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:22:30.180 09:13:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:30.180 09:13:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:22:30.180 09:13:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:22:30.180 09:13:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:22:30.180 09:13:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:30.180 09:13:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:22:30.180 09:13:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:22:30.180 09:13:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:22:30.180 09:13:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:22:30.180 09:13:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:22:30.180 09:13:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:30.180 09:13:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:22:30.180 09:13:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:22:30.180 09:13:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:22:30.180 09:13:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' 
]] 00:22:30.180 09:13:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:30.180 09:13:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:30.180 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:30.180 fio-3.35 00:22:30.180 Starting 1 thread 00:22:32.706 00:22:32.706 test: (groupid=0, jobs=1): err= 0: pid=3396323: Wed Nov 20 09:13:09 2024 00:22:32.706 read: IOPS=7920, BW=124MiB/s (130MB/s)(249MiB/2008msec) 00:22:32.706 slat (nsec): min=2831, max=93890, avg=3571.12, stdev=1841.41 00:22:32.706 clat (usec): min=1978, max=19330, avg=9249.15, stdev=2292.99 00:22:32.706 lat (usec): min=1981, max=19334, avg=9252.72, stdev=2292.96 00:22:32.706 clat percentiles (usec): 00:22:32.706 | 1.00th=[ 4817], 5.00th=[ 5800], 10.00th=[ 6521], 20.00th=[ 7439], 00:22:32.706 | 30.00th=[ 8029], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9503], 00:22:32.706 | 70.00th=[10028], 80.00th=[10945], 90.00th=[12256], 95.00th=[13435], 00:22:32.706 | 99.00th=[15795], 99.50th=[16319], 99.90th=[18220], 99.95th=[18744], 00:22:32.706 | 99.99th=[19268] 00:22:32.706 bw ( KiB/s): min=58848, max=69568, per=51.18%, avg=64856.00, stdev=4476.98, samples=4 00:22:32.706 iops : min= 3678, max= 4348, avg=4053.50, stdev=279.81, samples=4 00:22:32.706 write: IOPS=4646, BW=72.6MiB/s (76.1MB/s)(133MiB/1830msec); 0 zone resets 00:22:32.706 slat (usec): min=30, max=137, avg=33.91, stdev= 5.69 00:22:32.706 clat (usec): min=5398, max=19796, avg=12145.25, stdev=2134.26 00:22:32.706 lat (usec): min=5431, max=19827, avg=12179.16, stdev=2134.15 00:22:32.706 clat percentiles (usec): 00:22:32.706 | 1.00th=[ 7767], 5.00th=[ 8848], 10.00th=[ 9503], 
20.00th=[10290], 00:22:32.706 | 30.00th=[10945], 40.00th=[11469], 50.00th=[11994], 60.00th=[12649], 00:22:32.706 | 70.00th=[13304], 80.00th=[14091], 90.00th=[14877], 95.00th=[15795], 00:22:32.706 | 99.00th=[17433], 99.50th=[17695], 99.90th=[19268], 99.95th=[19530], 00:22:32.706 | 99.99th=[19792] 00:22:32.706 bw ( KiB/s): min=61632, max=72384, per=90.94%, avg=67616.00, stdev=4554.80, samples=4 00:22:32.706 iops : min= 3852, max= 4524, avg=4226.00, stdev=284.68, samples=4 00:22:32.706 lat (msec) : 2=0.01%, 4=0.21%, 10=50.88%, 20=48.90% 00:22:32.706 cpu : usr=75.78%, sys=23.02%, ctx=45, majf=0, minf=54 00:22:32.706 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:32.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:32.706 issued rwts: total=15904,8504,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.706 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:32.706 00:22:32.706 Run status group 0 (all jobs): 00:22:32.706 READ: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=249MiB (261MB), run=2008-2008msec 00:22:32.706 WRITE: bw=72.6MiB/s (76.1MB/s), 72.6MiB/s-72.6MiB/s (76.1MB/s-76.1MB/s), io=133MiB (139MB), run=1830-1830msec 00:22:32.706 09:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:32.963 09:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:32.963 09:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:32.963 09:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:32.963 09:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:32.963 09:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 
00:22:32.963 09:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:22:32.963 09:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:32.963 09:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:22:32.963 09:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:32.963 09:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:32.963 rmmod nvme_tcp 00:22:32.963 rmmod nvme_fabrics 00:22:32.963 rmmod nvme_keyring 00:22:32.963 09:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:32.963 09:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:22:32.963 09:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:22:32.963 09:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3395629 ']' 00:22:32.963 09:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3395629 00:22:32.963 09:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 3395629 ']' 00:22:32.963 09:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 3395629 00:22:32.963 09:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:22:32.963 09:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:32.963 09:13:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3395629 00:22:33.220 09:13:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:33.220 09:13:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:33.220 09:13:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3395629' 
00:22:33.220 killing process with pid 3395629 00:22:33.220 09:13:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 3395629 00:22:33.220 09:13:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 3395629 00:22:33.478 09:13:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:33.478 09:13:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:33.478 09:13:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:33.478 09:13:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:22:33.478 09:13:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:22:33.478 09:13:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:33.478 09:13:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:22:33.478 09:13:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:33.478 09:13:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:33.478 09:13:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.478 09:13:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:33.478 09:13:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.379 09:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:35.379 00:22:35.379 real 0m12.455s 00:22:35.379 user 0m36.649s 00:22:35.379 sys 0m4.146s 00:22:35.379 09:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:35.379 09:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.379 ************************************ 
00:22:35.379 END TEST nvmf_fio_host 00:22:35.379 ************************************ 00:22:35.379 09:13:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:35.379 09:13:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:35.379 09:13:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:35.379 09:13:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.379 ************************************ 00:22:35.379 START TEST nvmf_failover 00:22:35.379 ************************************ 00:22:35.379 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:35.638 * Looking for test storage... 00:22:35.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:22:35.638 09:13:12 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:35.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.638 --rc genhtml_branch_coverage=1 00:22:35.638 --rc genhtml_function_coverage=1 00:22:35.638 --rc genhtml_legend=1 00:22:35.638 --rc geninfo_all_blocks=1 00:22:35.638 --rc geninfo_unexecuted_blocks=1 00:22:35.638 00:22:35.638 ' 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:35.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.638 --rc genhtml_branch_coverage=1 00:22:35.638 --rc genhtml_function_coverage=1 00:22:35.638 --rc genhtml_legend=1 00:22:35.638 --rc geninfo_all_blocks=1 00:22:35.638 --rc geninfo_unexecuted_blocks=1 00:22:35.638 00:22:35.638 ' 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:35.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.638 --rc genhtml_branch_coverage=1 00:22:35.638 --rc genhtml_function_coverage=1 00:22:35.638 --rc genhtml_legend=1 00:22:35.638 --rc geninfo_all_blocks=1 00:22:35.638 --rc geninfo_unexecuted_blocks=1 00:22:35.638 00:22:35.638 ' 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:35.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.638 --rc genhtml_branch_coverage=1 00:22:35.638 --rc genhtml_function_coverage=1 00:22:35.638 --rc genhtml_legend=1 00:22:35.638 --rc 
geninfo_all_blocks=1 00:22:35.638 --rc geninfo_unexecuted_blocks=1 00:22:35.638 00:22:35.638 ' 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:35.638 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:35.639 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:22:35.639 09:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:38.167 09:13:14 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:38.167 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:38.167 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:38.167 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:38.168 Found net devices under 0000:09:00.0: cvl_0_0 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:38.168 Found net devices under 0000:09:00.1: cvl_0_1 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:38.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:38.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:22:38.168 00:22:38.168 --- 10.0.0.2 ping statistics --- 00:22:38.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.168 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:38.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:38.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:22:38.168 00:22:38.168 --- 10.0.0.1 ping statistics --- 00:22:38.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.168 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3398653 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 3398653 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 3398653 ']' 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:38.168 09:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:38.168 [2024-11-20 09:13:14.975255] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:22:38.168 [2024-11-20 09:13:14.975343] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.168 [2024-11-20 09:13:15.044073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:38.168 [2024-11-20 09:13:15.102151] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.168 [2024-11-20 09:13:15.102208] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.168 [2024-11-20 09:13:15.102236] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.168 [2024-11-20 09:13:15.102247] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:38.168 [2024-11-20 09:13:15.102257] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:38.168 [2024-11-20 09:13:15.103790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:38.168 [2024-11-20 09:13:15.103854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:38.168 [2024-11-20 09:13:15.103858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.426 09:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:38.426 09:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:22:38.426 09:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:38.426 09:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:38.426 09:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:38.426 09:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.426 09:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:38.684 [2024-11-20 09:13:15.557469] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.684 09:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:38.942 Malloc0 00:22:38.942 09:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:39.199 09:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:39.764 09:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:40.021 [2024-11-20 09:13:16.819472] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.021 09:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:40.279 [2024-11-20 09:13:17.124257] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:40.279 09:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:40.537 [2024-11-20 09:13:17.385133] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:40.537 09:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3398941 00:22:40.537 09:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:40.537 09:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:40.537 09:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3398941 /var/tmp/bdevperf.sock 00:22:40.537 09:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 
-- # '[' -z 3398941 ']' 00:22:40.537 09:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:40.537 09:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:40.537 09:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:40.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:40.537 09:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:40.537 09:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:40.792 09:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:40.792 09:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:22:40.792 09:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:41.355 NVMe0n1 00:22:41.355 09:13:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:41.613 00:22:41.613 09:13:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3399076 00:22:41.613 09:13:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:41.613 09:13:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:22:42.545 09:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:43.110 [2024-11-20 09:13:19.833364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1877c70 is same with the state(6) to be set 00:22:43.110 [2024-11-20 09:13:19.833458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1877c70 is same with the state(6) to be set 00:22:43.110 [2024-11-20 09:13:19.833476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1877c70 is same with the state(6) to be set 00:22:43.110 [2024-11-20 09:13:19.833489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1877c70 is same with the state(6) to be set 00:22:43.110 [2024-11-20 09:13:19.833512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1877c70 is same with the state(6) to be set 00:22:43.110 [2024-11-20 09:13:19.833524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1877c70 is same with the state(6) to be set 00:22:43.110 [2024-11-20 09:13:19.833536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1877c70 is same with the state(6) to be set 00:22:43.110 [2024-11-20 09:13:19.833548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1877c70 is same with the state(6) to be set 00:22:43.110 [2024-11-20 09:13:19.833560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1877c70 is same with the state(6) to be set 00:22:43.110 [2024-11-20 09:13:19.833572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1877c70 is same with the state(6) to be set 00:22:43.110 [2024-11-20 09:13:19.833583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1877c70 is same with the state(6) to be set
00:22:43.110 [2024-11-20 09:13:19.833595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1877c70 is same with the state(6) to be set
00:22:43.110 [2024-11-20 09:13:19.833607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1877c70 is same with the state(6) to be set
00:22:43.110 [2024-11-20 09:13:19.833618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1877c70 is same with the state(6) to be set
00:22:43.110 [2024-11-20 09:13:19.833630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1877c70 is same with the state(6) to be set
00:22:43.110 [2024-11-20 09:13:19.833656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1877c70 is same with the state(6) to be set
00:22:43.110 09:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:22:46.385 09:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:22:46.385
00:22:46.385 09:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:22:46.643 09:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:22:49.925 09:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:49.925 [2024-11-20 09:13:26.866135] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:49.925 09:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:22:50.925 09:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:22:51.183 [2024-11-20 09:13:28.206251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1879b60 is same with the state(6) to be set
00:22:51.183 [2024-11-20 09:13:28.206325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1879b60 is same with the state(6) to be set
00:22:51.183 [2024-11-20 09:13:28.206343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1879b60 is same with the state(6) to be set
00:22:51.183 [2024-11-20 09:13:28.206356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1879b60 is same with the state(6) to be set
00:22:51.183 [2024-11-20 09:13:28.206369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1879b60 is same with the state(6) to be set
00:22:51.183 [2024-11-20 09:13:28.206381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1879b60 is same with the state(6) to be set
00:22:51.183 [2024-11-20 09:13:28.206393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1879b60 is same with the state(6) to be set
00:22:51.183 [2024-11-20 09:13:28.206418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1879b60 is same with the state(6) to be set
00:22:51.183 [2024-11-20 09:13:28.206431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1879b60 is same with the state(6) to be set
00:22:51.440 09:13:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3399076
00:22:56.695 {
00:22:56.695 "results": [
00:22:56.695 {
00:22:56.695 "job": "NVMe0n1",
00:22:56.695 "core_mask": "0x1",
00:22:56.695 "workload": "verify",
00:22:56.695 "status": "finished",
00:22:56.695 "verify_range": {
00:22:56.695 "start": 0,
00:22:56.695 "length": 16384
00:22:56.695 },
00:22:56.695 "queue_depth": 128,
00:22:56.695 "io_size": 4096,
00:22:56.695 "runtime": 15.010593,
00:22:56.695 "iops": 8465.421719181913,
00:22:56.695 "mibps": 33.06805359055435,
00:22:56.695 "io_failed": 7045,
00:22:56.695 "io_timeout": 0,
00:22:56.695 "avg_latency_us": 14298.560135383079,
00:22:56.695 "min_latency_us": 533.997037037037,
00:22:56.695 "max_latency_us": 28350.388148148148
00:22:56.695 }
00:22:56.695 ],
00:22:56.695 "core_count": 1
00:22:56.695 }
00:22:56.695 09:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3398941
00:22:56.695 09:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 3398941 ']'
00:22:56.695 09:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 3398941
00:22:56.695 09:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
00:22:56.960 09:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:22:56.960 09:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3398941
00:22:56.960 09:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:22:56.960 09:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:22:56.960 09:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3398941'
killing process with pid 3398941
09:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 3398941
09:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 3398941
00:22:56.960 09:13:33 nvmf_tcp.nvmf_host.nvmf_failover --
host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:56.960 [2024-11-20 09:13:17.452758] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization...
00:22:56.960 [2024-11-20 09:13:17.452847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3398941 ]
00:22:56.960 [2024-11-20 09:13:17.524045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:56.960 [2024-11-20 09:13:17.585558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:56.960 Running I/O for 15 seconds...
00:22:56.960 8492.00 IOPS, 33.17 MiB/s [2024-11-20T08:13:33.989Z] [2024-11-20 09:13:19.836448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:80184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:56.960 [2024-11-20 09:13:19.836492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:56.960 [2024-11-20 09:13:19.836520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:80192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:56.960 [2024-11-20 09:13:19.836536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:56.960 [2024-11-20 09:13:19.836552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:80200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:56.960 [2024-11-20 09:13:19.836566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:56.960 [2024-11-20 09:13:19.836581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84
nsid:1 lba:80208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.960 [2024-11-20 09:13:19.836594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.960 [2024-11-20 09:13:19.836609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:80216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.960 [2024-11-20 09:13:19.836624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.960 [2024-11-20 09:13:19.836639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:80224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.960 [2024-11-20 09:13:19.836652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.960 [2024-11-20 09:13:19.836667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:80232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.960 [2024-11-20 09:13:19.836688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.960 [2024-11-20 09:13:19.836702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:80240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.960 [2024-11-20 09:13:19.836715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.960 [2024-11-20 09:13:19.836730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:80248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.960 [2024-11-20 09:13:19.836743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:56.960 [2024-11-20 09:13:19.836757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.960 [2024-11-20 09:13:19.836770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.960 [2024-11-20 09:13:19.836785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.960 [2024-11-20 09:13:19.836798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.960 [2024-11-20 09:13:19.836820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.960 [2024-11-20 09:13:19.836835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.960 [2024-11-20 09:13:19.836849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.960 [2024-11-20 09:13:19.836863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.960 [2024-11-20 09:13:19.836877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.960 [2024-11-20 09:13:19.836891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.960 [2024-11-20 09:13:19.836905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.960 [2024-11-20 09:13:19.836918] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.960 [2024-11-20 09:13:19.836932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.960 [2024-11-20 09:13:19.836946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.960 [2024-11-20 09:13:19.836961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.960 [2024-11-20 09:13:19.836974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.961 [2024-11-20 09:13:19.836988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.961 [2024-11-20 09:13:19.837017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.961 [2024-11-20 09:13:19.837032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.961 [2024-11-20 09:13:19.837045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.961 [2024-11-20 09:13:19.837060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.961 [2024-11-20 09:13:19.837073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.961 [2024-11-20 09:13:19.837087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 
lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.961 [2024-11-20 09:13:19.837101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.961 [2024-11-20 09:13:19.837115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.961 [2024-11-20 09:13:19.837128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.961 [2024-11-20 09:13:19.837143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.961 [2024-11-20 09:13:19.837156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.961 [2024-11-20 09:13:19.837170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.961 [2024-11-20 09:13:19.837192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.961 [2024-11-20 09:13:19.837208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.961 [2024-11-20 09:13:19.837221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.961 [2024-11-20 09:13:19.837236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.961 [2024-11-20 09:13:19.837250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.961 
[2024-11-20 09:13:19.837265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.961 [2024-11-20 09:13:19.837278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.961 [2024-11-20 09:13:19.837292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.961 [2024-11-20 09:13:19.837314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.961 [2024-11-20 09:13:19.837331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.961 [2024-11-20 09:13:19.837345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.961 [2024-11-20 09:13:19.837359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.961 [2024-11-20 09:13:19.837373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.961 [2024-11-20 09:13:19.837387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.961 [2024-11-20 09:13:19.837400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.961 [2024-11-20 09:13:19.837415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.961 [2024-11-20 09:13:19.837428] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.961 [2024-11-20 09:13:19.837442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.961 [2024-11-20 09:13:19.837455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.961 [2024-11-20 09:13:19.837469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.961 [2024-11-20 09:13:19.837483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.961 [2024-11-20 09:13:19.837497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.961 [2024-11-20 09:13:19.837510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.961 [2024-11-20 09:13:19.837524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.961 [2024-11-20 09:13:19.837537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.961 [2024-11-20 09:13:19.837556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.961 [2024-11-20 09:13:19.837570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.961 [2024-11-20 09:13:19.837584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 
lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.961 [2024-11-20 09:13:19.837598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.961 [2024-11-20 09:13:19.837612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.961 [2024-11-20 09:13:19.837625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.961 [2024-11-20 09:13:19.837640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.961 [2024-11-20 09:13:19.837653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.961 [2024-11-20 09:13:19.837667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.961 [2024-11-20 09:13:19.837681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.961 [2024-11-20 09:13:19.837695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.961 [2024-11-20 09:13:19.837709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.961 [2024-11-20 09:13:19.837723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.961 [2024-11-20 09:13:19.837736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.961 [2024-11-20 
09:13:19.837750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.961 [2024-11-20 09:13:19.837764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.961 [2024-11-20 09:13:19.837778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.961 [2024-11-20 09:13:19.837791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.961 [2024-11-20 09:13:19.837806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.961 [2024-11-20 09:13:19.837819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.961 [2024-11-20 09:13:19.837833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.961 [2024-11-20 09:13:19.837847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.961 [2024-11-20 09:13:19.837861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.961 [2024-11-20 09:13:19.837875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.961 [2024-11-20 09:13:19.837889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.961 [2024-11-20 09:13:19.837902] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.961 [2024-11-20 09:13:19.837920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.961 [2024-11-20 09:13:19.837934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.837948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.962 [2024-11-20 09:13:19.837961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.837975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.962 [2024-11-20 09:13:19.837989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.838003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.962 [2024-11-20 09:13:19.838016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.838030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.962 [2024-11-20 09:13:19.838043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.838058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:22:56.962 [2024-11-20 09:13:19.838071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.838085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.962 [2024-11-20 09:13:19.838098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.838112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.962 [2024-11-20 09:13:19.838126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.838140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.962 [2024-11-20 09:13:19.838154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.838168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.962 [2024-11-20 09:13:19.838181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.838195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.962 [2024-11-20 09:13:19.838208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.838223] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.962 [2024-11-20 09:13:19.838236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.838251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.962 [2024-11-20 09:13:19.838268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.838284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.962 [2024-11-20 09:13:19.838297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.838326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.962 [2024-11-20 09:13:19.838341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.838356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.962 [2024-11-20 09:13:19.838369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.838384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.962 [2024-11-20 09:13:19.838397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.838411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.962 [2024-11-20 09:13:19.838424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.838439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.962 [2024-11-20 09:13:19.838452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.838467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.962 [2024-11-20 09:13:19.838480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.838495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.962 [2024-11-20 09:13:19.838509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.838523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.962 [2024-11-20 09:13:19.838536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.838551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.962 
[2024-11-20 09:13:19.838564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.838579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.962 [2024-11-20 09:13:19.838593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.838608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.962 [2024-11-20 09:13:19.838621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.838641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.962 [2024-11-20 09:13:19.838655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.838669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.962 [2024-11-20 09:13:19.838683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.838698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.962 [2024-11-20 09:13:19.838712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.838726] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.962 [2024-11-20 09:13:19.838740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.838754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.962 [2024-11-20 09:13:19.838768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.838782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.962 [2024-11-20 09:13:19.838796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.838811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.962 [2024-11-20 09:13:19.838824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.838839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.962 [2024-11-20 09:13:19.838852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.838866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.962 [2024-11-20 09:13:19.838880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.838911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.962 [2024-11-20 09:13:19.838928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80848 len:8 PRP1 0x0 PRP2 0x0 00:22:56.962 [2024-11-20 09:13:19.838941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.838959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.962 [2024-11-20 09:13:19.838971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.962 [2024-11-20 09:13:19.838982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80856 len:8 PRP1 0x0 PRP2 0x0 00:22:56.962 [2024-11-20 09:13:19.838995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.839008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.962 [2024-11-20 09:13:19.839019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.962 [2024-11-20 09:13:19.839034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80864 len:8 PRP1 0x0 PRP2 0x0 00:22:56.962 [2024-11-20 09:13:19.839047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.962 [2024-11-20 09:13:19.839068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.962 [2024-11-20 09:13:19.839079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.962 [2024-11-20 09:13:19.839090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:80872 len:8 PRP1 0x0 PRP2 0x0 00:22:56.963 [2024-11-20 09:13:19.839103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.963 [2024-11-20 09:13:19.839116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.963 [2024-11-20 09:13:19.839126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.963 [2024-11-20 09:13:19.839137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80880 len:8 PRP1 0x0 PRP2 0x0 00:22:56.963 [2024-11-20 09:13:19.839149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.963 [2024-11-20 09:13:19.839162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.963 [2024-11-20 09:13:19.839172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.963 [2024-11-20 09:13:19.839183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80888 len:8 PRP1 0x0 PRP2 0x0 00:22:56.963 [2024-11-20 09:13:19.839195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.963 [2024-11-20 09:13:19.839208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.963 [2024-11-20 09:13:19.839218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.963 [2024-11-20 09:13:19.839229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80896 len:8 PRP1 0x0 PRP2 0x0 00:22:56.963 [2024-11-20 09:13:19.839242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.963 [2024-11-20 
09:13:19.839254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.963 [2024-11-20 09:13:19.839264] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.963 [2024-11-20 09:13:19.839275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80904 len:8 PRP1 0x0 PRP2 0x0 00:22:56.963 [2024-11-20 09:13:19.839287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.963 [2024-11-20 09:13:19.839300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.963 [2024-11-20 09:13:19.839319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.963 [2024-11-20 09:13:19.839330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80912 len:8 PRP1 0x0 PRP2 0x0 00:22:56.963 [2024-11-20 09:13:19.839343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.963 [2024-11-20 09:13:19.839356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.963 [2024-11-20 09:13:19.839366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.963 [2024-11-20 09:13:19.839377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80920 len:8 PRP1 0x0 PRP2 0x0 00:22:56.963 [2024-11-20 09:13:19.839389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.963 [2024-11-20 09:13:19.839401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.963 [2024-11-20 09:13:19.839416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.963 
[2024-11-20 09:13:19.839427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80928 len:8 PRP1 0x0 PRP2 0x0 00:22:56.963 [2024-11-20 09:13:19.839440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.963 [2024-11-20 09:13:19.839457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.963 [2024-11-20 09:13:19.839469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.963 [2024-11-20 09:13:19.839479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80936 len:8 PRP1 0x0 PRP2 0x0 00:22:56.963 [2024-11-20 09:13:19.839492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.963 [2024-11-20 09:13:19.839504] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.963 [2024-11-20 09:13:19.839515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.963 [2024-11-20 09:13:19.839526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80944 len:8 PRP1 0x0 PRP2 0x0 00:22:56.963 [2024-11-20 09:13:19.839538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.963 [2024-11-20 09:13:19.839550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.963 [2024-11-20 09:13:19.839561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.963 [2024-11-20 09:13:19.839571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80952 len:8 PRP1 0x0 PRP2 0x0 00:22:56.963 [2024-11-20 09:13:19.839583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.963 [2024-11-20 09:13:19.839596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.963 [2024-11-20 09:13:19.839606] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.963 [2024-11-20 09:13:19.839617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80960 len:8 PRP1 0x0 PRP2 0x0 00:22:56.963 [2024-11-20 09:13:19.839629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.963 [2024-11-20 09:13:19.839641] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.963 [2024-11-20 09:13:19.839652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.963 [2024-11-20 09:13:19.839662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80968 len:8 PRP1 0x0 PRP2 0x0 00:22:56.963 [2024-11-20 09:13:19.839675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.963 [2024-11-20 09:13:19.839687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.963 [2024-11-20 09:13:19.839698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.963 [2024-11-20 09:13:19.839709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80976 len:8 PRP1 0x0 PRP2 0x0 00:22:56.963 [2024-11-20 09:13:19.839721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.963 [2024-11-20 09:13:19.839733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.963 [2024-11-20 09:13:19.839744] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.963 [2024-11-20 09:13:19.839755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80984 len:8 PRP1 0x0 PRP2 0x0 00:22:56.963 [2024-11-20 09:13:19.839767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.963 [2024-11-20 09:13:19.839783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.963 [2024-11-20 09:13:19.839794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.963 [2024-11-20 09:13:19.839805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80992 len:8 PRP1 0x0 PRP2 0x0 00:22:56.963 [2024-11-20 09:13:19.839818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.963 [2024-11-20 09:13:19.839835] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.963 [2024-11-20 09:13:19.839846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.963 [2024-11-20 09:13:19.839857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81000 len:8 PRP1 0x0 PRP2 0x0 00:22:56.963 [2024-11-20 09:13:19.839870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.963 [2024-11-20 09:13:19.839882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.963 [2024-11-20 09:13:19.839893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.963 [2024-11-20 09:13:19.839903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81008 len:8 PRP1 0x0 PRP2 0x0 00:22:56.963 
[2024-11-20 09:13:19.839916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.963 [2024-11-20 09:13:19.839928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.963 [2024-11-20 09:13:19.839939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.963 [2024-11-20 09:13:19.839950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81016 len:8 PRP1 0x0 PRP2 0x0 00:22:56.963 [2024-11-20 09:13:19.839961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.963 [2024-11-20 09:13:19.839974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.963 [2024-11-20 09:13:19.839985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.963 [2024-11-20 09:13:19.839995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81024 len:8 PRP1 0x0 PRP2 0x0 00:22:56.963 [2024-11-20 09:13:19.840007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.963 [2024-11-20 09:13:19.840020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.963 [2024-11-20 09:13:19.840030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.963 [2024-11-20 09:13:19.840041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81032 len:8 PRP1 0x0 PRP2 0x0 00:22:56.963 [2024-11-20 09:13:19.840053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.963 [2024-11-20 09:13:19.840066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:22:56.963 [2024-11-20 09:13:19.840076] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.963 [2024-11-20 09:13:19.840087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81040 len:8 PRP1 0x0 PRP2 0x0 00:22:56.963 [2024-11-20 09:13:19.840099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.963 [2024-11-20 09:13:19.840111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.963 [2024-11-20 09:13:19.840122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.963 [2024-11-20 09:13:19.840132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81048 len:8 PRP1 0x0 PRP2 0x0 00:22:56.963 [2024-11-20 09:13:19.840148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.963 [2024-11-20 09:13:19.840161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.963 [2024-11-20 09:13:19.840172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.963 [2024-11-20 09:13:19.840182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81056 len:8 PRP1 0x0 PRP2 0x0 00:22:56.964 [2024-11-20 09:13:19.840194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.964 [2024-11-20 09:13:19.840207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.964 [2024-11-20 09:13:19.840218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.964 [2024-11-20 09:13:19.840229] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81064 len:8 PRP1 0x0 PRP2 0x0 00:22:56.964 [2024-11-20 09:13:19.840241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.964 [2024-11-20 09:13:19.840253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.964 [2024-11-20 09:13:19.840263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.964 [2024-11-20 09:13:19.840274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81072 len:8 PRP1 0x0 PRP2 0x0 00:22:56.964 [2024-11-20 09:13:19.840286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.964 [2024-11-20 09:13:19.840298] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.964 [2024-11-20 09:13:19.840317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.964 [2024-11-20 09:13:19.840329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81080 len:8 PRP1 0x0 PRP2 0x0 00:22:56.964 [2024-11-20 09:13:19.840341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.964 [2024-11-20 09:13:19.840353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.964 [2024-11-20 09:13:19.840364] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.964 [2024-11-20 09:13:19.840374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81088 len:8 PRP1 0x0 PRP2 0x0 00:22:56.964 [2024-11-20 09:13:19.840386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:56.964 [2024-11-20 09:13:19.840398] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.964 [2024-11-20 09:13:19.840409] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.964 [2024-11-20 09:13:19.840419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81096 len:8 PRP1 0x0 PRP2 0x0 00:22:56.964 [2024-11-20 09:13:19.840431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.964 [2024-11-20 09:13:19.840444] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.964 [2024-11-20 09:13:19.840454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.964 [2024-11-20 09:13:19.840465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81104 len:8 PRP1 0x0 PRP2 0x0 00:22:56.964 [2024-11-20 09:13:19.840477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.964 [2024-11-20 09:13:19.840489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.964 [2024-11-20 09:13:19.840503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.964 [2024-11-20 09:13:19.840514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81112 len:8 PRP1 0x0 PRP2 0x0 00:22:56.964 [2024-11-20 09:13:19.840526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.964 [2024-11-20 09:13:19.840539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.964 [2024-11-20 09:13:19.840549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:22:56.964 [2024-11-20 09:13:19.840560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81120 len:8 PRP1 0x0 PRP2 0x0 00:22:56.964 [2024-11-20 09:13:19.840572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.964 [2024-11-20 09:13:19.840585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.964 [2024-11-20 09:13:19.840595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.964 [2024-11-20 09:13:19.840606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81128 len:8 PRP1 0x0 PRP2 0x0 00:22:56.964 [2024-11-20 09:13:19.840618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.964 [2024-11-20 09:13:19.840631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.964 [2024-11-20 09:13:19.840644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.964 [2024-11-20 09:13:19.840656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81136 len:8 PRP1 0x0 PRP2 0x0 00:22:56.964 [2024-11-20 09:13:19.840668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.964 [2024-11-20 09:13:19.840681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.964 [2024-11-20 09:13:19.840692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.964 [2024-11-20 09:13:19.840703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81144 len:8 PRP1 0x0 PRP2 0x0 00:22:56.964 [2024-11-20 09:13:19.840715] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.964 [2024-11-20 09:13:19.840728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.964 [2024-11-20 09:13:19.840738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.964 [2024-11-20 09:13:19.840749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81152 len:8 PRP1 0x0 PRP2 0x0 00:22:56.964 [2024-11-20 09:13:19.840762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.964 [2024-11-20 09:13:19.840775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.964 [2024-11-20 09:13:19.840786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.964 [2024-11-20 09:13:19.840796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81160 len:8 PRP1 0x0 PRP2 0x0 00:22:56.964 [2024-11-20 09:13:19.840809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.964 [2024-11-20 09:13:19.840822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.964 [2024-11-20 09:13:19.840832] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.964 [2024-11-20 09:13:19.840844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81168 len:8 PRP1 0x0 PRP2 0x0 00:22:56.964 [2024-11-20 09:13:19.840856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.964 [2024-11-20 09:13:19.840872] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.964 
[2024-11-20 09:13:19.840883] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.964 [2024-11-20 09:13:19.840894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81176 len:8 PRP1 0x0 PRP2 0x0 00:22:56.964 [2024-11-20 09:13:19.840906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.964 [2024-11-20 09:13:19.840919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.964 [2024-11-20 09:13:19.840930] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.964 [2024-11-20 09:13:19.840941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81184 len:8 PRP1 0x0 PRP2 0x0 00:22:56.964 [2024-11-20 09:13:19.840953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.964 [2024-11-20 09:13:19.840967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.964 [2024-11-20 09:13:19.840978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.964 [2024-11-20 09:13:19.840989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81192 len:8 PRP1 0x0 PRP2 0x0 00:22:56.964 [2024-11-20 09:13:19.841001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.964 [2024-11-20 09:13:19.841014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.964 [2024-11-20 09:13:19.841024] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.964 [2024-11-20 09:13:19.841035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:81200 len:8 PRP1 0x0 PRP2 0x0 00:22:56.964 [2024-11-20 09:13:19.841048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.964 [2024-11-20 09:13:19.841119] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:56.964 [2024-11-20 09:13:19.841160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.964 [2024-11-20 09:13:19.841179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.964 [2024-11-20 09:13:19.841193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.964 [2024-11-20 09:13:19.841206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.964 [2024-11-20 09:13:19.841220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.964 [2024-11-20 09:13:19.841233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.964 [2024-11-20 09:13:19.841246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.964 [2024-11-20 09:13:19.841259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.964 [2024-11-20 09:13:19.841272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:22:56.964 [2024-11-20 09:13:19.841340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1804560 (9): Bad file descriptor 00:22:56.964 [2024-11-20 09:13:19.844537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:56.964 [2024-11-20 09:13:19.914268] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:22:56.964 8210.50 IOPS, 32.07 MiB/s [2024-11-20T08:13:33.993Z] 8320.00 IOPS, 32.50 MiB/s [2024-11-20T08:13:33.993Z] 8392.00 IOPS, 32.78 MiB/s [2024-11-20T08:13:33.993Z] [2024-11-20 09:13:23.587972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:92312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.964 [2024-11-20 09:13:23.588048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.964 [2024-11-20 09:13:23.588090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:92320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.965 [2024-11-20 09:13:23.588107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.965 [2024-11-20 09:13:23.588122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:92328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.965 [2024-11-20 09:13:23.588136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.965 [2024-11-20 09:13:23.588150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:92336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.965 [2024-11-20 09:13:23.588164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.965 [2024-11-20 09:13:23.588179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:92344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.965 [2024-11-20 09:13:23.588192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.965 [2024-11-20 09:13:23.588206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:92352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.965 [2024-11-20 09:13:23.588220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.965 [2024-11-20 09:13:23.588235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:92360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.965 [2024-11-20 09:13:23.588248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.965 [2024-11-20 09:13:23.588263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:92368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.965 [2024-11-20 09:13:23.588276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.965 [2024-11-20 09:13:23.588291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:92376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.965 [2024-11-20 09:13:23.588312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.965 [2024-11-20 09:13:23.588329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:92384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.965 
[2024-11-20 09:13:23.588342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.965 [2024-11-20 09:13:23.588357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:92392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.965 [2024-11-20 09:13:23.588370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.965 [2024-11-20 09:13:23.588385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:92400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.965 [2024-11-20 09:13:23.588398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.965 [2024-11-20 09:13:23.588422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:92408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.965 [2024-11-20 09:13:23.588436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.965 [2024-11-20 09:13:23.588450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:92416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.965 [2024-11-20 09:13:23.588463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.965 [2024-11-20 09:13:23.588477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:92424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.965 [2024-11-20 09:13:23.588491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.965 [2024-11-20 09:13:23.588505] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:92432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.965 [2024-11-20 09:13:23.588518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.965 [2024-11-20 09:13:23.588532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:92440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.965 [2024-11-20 09:13:23.588545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.965 [2024-11-20 09:13:23.588559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:92448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.965 [2024-11-20 09:13:23.588572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.965 [2024-11-20 09:13:23.588586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:92456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.965 [2024-11-20 09:13:23.588599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.965 [2024-11-20 09:13:23.588614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.965 [2024-11-20 09:13:23.588628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.965 [2024-11-20 09:13:23.588642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:92472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.965 [2024-11-20 09:13:23.588655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.965 [2024-11-20 09:13:23.588669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:92480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.965 [2024-11-20 09:13:23.588682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.965 [2024-11-20 09:13:23.588697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:92488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.965 [2024-11-20 09:13:23.588711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.965 [2024-11-20 09:13:23.588725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:92496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.965 [2024-11-20 09:13:23.588738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.965 [2024-11-20 09:13:23.588752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:92504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.965 [2024-11-20 09:13:23.588766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.965 [2024-11-20 09:13:23.588784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:92512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.965 [2024-11-20 09:13:23.588798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.965 [2024-11-20 09:13:23.588812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:92520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:56.965 [2024-11-20 09:13:23.588824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.965 [2024-11-20 09:13:23.588839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:92528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.965 [2024-11-20 09:13:23.588852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.965 [2024-11-20 09:13:23.588866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:92536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.965 [2024-11-20 09:13:23.588879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.965 [2024-11-20 09:13:23.588893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:92544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.965 [2024-11-20 09:13:23.588906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.965 [2024-11-20 09:13:23.588920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:92552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.965 [2024-11-20 09:13:23.588933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.965 [2024-11-20 09:13:23.588948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:92560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.965 [2024-11-20 09:13:23.588961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.965 [2024-11-20 09:13:23.588976] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:92568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.965 [2024-11-20 09:13:23.588990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.965 [2024-11-20 09:13:23.589003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:92576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.965 [2024-11-20 09:13:23.589016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.589032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:92584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.966 [2024-11-20 09:13:23.589045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.589060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:92592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.966 [2024-11-20 09:13:23.589073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.589087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:92600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.966 [2024-11-20 09:13:23.589100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.589114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.966 [2024-11-20 09:13:23.589131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.589146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:92616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.966 [2024-11-20 09:13:23.589160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.589174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:92624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.966 [2024-11-20 09:13:23.589187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.589202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:92632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.966 [2024-11-20 09:13:23.589215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.589229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:92640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.966 [2024-11-20 09:13:23.589242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.589256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:92648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.966 [2024-11-20 09:13:23.589270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.589298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:92656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.966 
[2024-11-20 09:13:23.589321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.589337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:92664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.966 [2024-11-20 09:13:23.589350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.589366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:92672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.966 [2024-11-20 09:13:23.589380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.589395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.966 [2024-11-20 09:13:23.589408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.589423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:92688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.966 [2024-11-20 09:13:23.589436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.589451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:92696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.966 [2024-11-20 09:13:23.589465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.589479] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:92704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.966 [2024-11-20 09:13:23.589493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.589512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.966 [2024-11-20 09:13:23.589526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.589543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.966 [2024-11-20 09:13:23.589558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.589573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:92728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.966 [2024-11-20 09:13:23.589587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.589602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:92736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.966 [2024-11-20 09:13:23.589616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.589631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:92744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.966 [2024-11-20 09:13:23.589644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.589659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:92752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.966 [2024-11-20 09:13:23.589673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.589688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:92760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.966 [2024-11-20 09:13:23.589702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.589717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:92768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.966 [2024-11-20 09:13:23.589730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.589745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:92776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.966 [2024-11-20 09:13:23.589759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.589775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.966 [2024-11-20 09:13:23.589788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.589803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.966 
[2024-11-20 09:13:23.589817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.589833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:92800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.966 [2024-11-20 09:13:23.589846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.589861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:92808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.966 [2024-11-20 09:13:23.589878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.589894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:92832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.966 [2024-11-20 09:13:23.589909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.589923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:92840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.966 [2024-11-20 09:13:23.589937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.589952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.966 [2024-11-20 09:13:23.589966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.589981] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:92856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.966 [2024-11-20 09:13:23.589995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.590010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:92864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.966 [2024-11-20 09:13:23.590024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.590038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:92872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.966 [2024-11-20 09:13:23.590052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.590067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:92880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.966 [2024-11-20 09:13:23.590081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.590096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.966 [2024-11-20 09:13:23.590110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.590125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:92896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.966 [2024-11-20 09:13:23.590138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:56.966 [2024-11-20 09:13:23.590154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:92904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.966 [2024-11-20 09:13:23.590168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.590183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:92912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.590197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.590212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:92920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.590226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.590244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:92928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.590258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.590273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:92936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.590287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.590311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:92944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.590327] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.590342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:92952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.590356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.590371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:92960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.590385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.590400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:92968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.590413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.590428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:92976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.590442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.590457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:92984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.590471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.590486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 
lba:92992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.590499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.590513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:93000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.590526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.590541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:93008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.590555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.590569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:93016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.590582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.590597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:93024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.590610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.590629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:93032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.590643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 
[2024-11-20 09:13:23.590658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:93040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.590671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.590685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.590699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.590713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:93056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.590726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.590741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:93064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.590754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.590768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:93072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.590782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.590796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:93080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.590809] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.590823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:93088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.590837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.590852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:93096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.590865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.590880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:93104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.590893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.590908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:93112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.590921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.590935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:93120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.590948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.590962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 
lba:93128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.590979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.590994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:93136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.591008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.591022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:93144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.591035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.591049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.591063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.591077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:93160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.591090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.591111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:93168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.591125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 
09:13:23.591140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:93176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.591154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.591168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:93184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.591181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.591195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:93192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.591209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.591223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:93200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.591236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.591251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:93208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.591264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.591278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:93216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.591292] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.967 [2024-11-20 09:13:23.591312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:93224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.967 [2024-11-20 09:13:23.591327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:23.591349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:93232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.968 [2024-11-20 09:13:23.591363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:23.591379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:93240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.968 [2024-11-20 09:13:23.591392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:23.591407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:93248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.968 [2024-11-20 09:13:23.591421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:23.591435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:93256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.968 [2024-11-20 09:13:23.591449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:23.591464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:93264 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:22:56.968 [2024-11-20 09:13:23.591477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:23.591491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:93272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.968 [2024-11-20 09:13:23.591504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:23.591519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:93280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.968 [2024-11-20 09:13:23.591533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:23.591547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:93288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.968 [2024-11-20 09:13:23.591560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:23.591575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:93296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.968 [2024-11-20 09:13:23.591589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:23.591604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:93304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.968 [2024-11-20 09:13:23.591617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:23.591632] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:93312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.968 [2024-11-20 09:13:23.591645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:23.591660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:93320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.968 [2024-11-20 09:13:23.591673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:23.591688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:93328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.968 [2024-11-20 09:13:23.591701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:23.591719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.968 [2024-11-20 09:13:23.591734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:23.591769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.968 [2024-11-20 09:13:23.591785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.968 [2024-11-20 09:13:23.591797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92824 len:8 PRP1 0x0 PRP2 0x0 00:22:56.968 [2024-11-20 09:13:23.591811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:23.591877] 
bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:56.968 [2024-11-20 09:13:23.591915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.968 [2024-11-20 09:13:23.591934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:23.591949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.968 [2024-11-20 09:13:23.591961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:23.591975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.968 [2024-11-20 09:13:23.591988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:23.592001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.968 [2024-11-20 09:13:23.592014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:23.592027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
00:22:56.968 [2024-11-20 09:13:23.595267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:56.968 [2024-11-20 09:13:23.595317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1804560 (9): Bad file descriptor 00:22:56.968 [2024-11-20 09:13:23.663820] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:22:56.968 8269.40 IOPS, 32.30 MiB/s [2024-11-20T08:13:33.997Z] 8330.67 IOPS, 32.54 MiB/s [2024-11-20T08:13:33.997Z] 8366.43 IOPS, 32.68 MiB/s [2024-11-20T08:13:33.997Z] 8399.50 IOPS, 32.81 MiB/s [2024-11-20T08:13:33.997Z] 8416.56 IOPS, 32.88 MiB/s [2024-11-20T08:13:33.997Z] [2024-11-20 09:13:28.209517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:34016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.968 [2024-11-20 09:13:28.209568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:28.209612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:34024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.968 [2024-11-20 09:13:28.209629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:28.209645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:34032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.968 [2024-11-20 09:13:28.209659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:28.209681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:34040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.968 [2024-11-20 
09:13:28.209695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:28.209710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.968 [2024-11-20 09:13:28.209723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:28.209737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.968 [2024-11-20 09:13:28.209751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:28.209765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.968 [2024-11-20 09:13:28.209778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:28.209792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:34072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.968 [2024-11-20 09:13:28.209805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:28.209820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:34080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.968 [2024-11-20 09:13:28.209833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:28.209862] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:55 nsid:1 lba:34088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.968 [2024-11-20 09:13:28.209876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:28.209891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.968 [2024-11-20 09:13:28.209905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:28.209920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:34104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.968 [2024-11-20 09:13:28.209934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:28.209949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:34112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.968 [2024-11-20 09:13:28.209962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:28.209977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:34120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.968 [2024-11-20 09:13:28.209991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:28.210006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:34128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.968 [2024-11-20 09:13:28.210019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:28.210034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:34136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.968 [2024-11-20 09:13:28.210052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.968 [2024-11-20 09:13:28.210068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:34144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.969 [2024-11-20 09:13:28.210082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.969 [2024-11-20 09:13:28.210097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:34152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.969 [2024-11-20 09:13:28.210110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.969 [2024-11-20 09:13:28.210125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:34160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.969 [2024-11-20 09:13:28.210139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.969 [2024-11-20 09:13:28.210153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:34168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.969 [2024-11-20 09:13:28.210167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.969 [2024-11-20 09:13:28.210182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:34176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.969 [2024-11-20 09:13:28.210196] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.969 [2024-11-20 09:13:28.210211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:34184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.969 [2024-11-20 09:13:28.210224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.969 [2024-11-20 09:13:28.210239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:34192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.969 [2024-11-20 09:13:28.210253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.969 [2024-11-20 09:13:28.210268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:34200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.969 [2024-11-20 09:13:28.210281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.969 [2024-11-20 09:13:28.210296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:34208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.969 [2024-11-20 09:13:28.210319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.969 [2024-11-20 09:13:28.210335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:34216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.969 [2024-11-20 09:13:28.210349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.969 [2024-11-20 09:13:28.210364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 
nsid:1 lba:34224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.969 [2024-11-20 09:13:28.210378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.969 [2024-11-20 09:13:28.210392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:34232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.969 [2024-11-20 09:13:28.210405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.969 [2024-11-20 09:13:28.210420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:34240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.969 [2024-11-20 09:13:28.210438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.969 [2024-11-20 09:13:28.210454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:34248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.969 [2024-11-20 09:13:28.210468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.969 [2024-11-20 09:13:28.210483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:34256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.969 [2024-11-20 09:13:28.210496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.969 [2024-11-20 09:13:28.210511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:34264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.969 [2024-11-20 09:13:28.210525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.969 
[2024-11-20 09:13:28.210539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.969 [2024-11-20 09:13:28.210552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.969 [2024-11-20 09:13:28.210567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:34280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.969 [2024-11-20 09:13:28.210580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.969 [2024-11-20 09:13:28.210595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:34288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.969 [2024-11-20 09:13:28.210608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.969 [2024-11-20 09:13:28.210623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:34296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.969 [2024-11-20 09:13:28.210636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.969 [2024-11-20 09:13:28.210650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:34304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.969 [2024-11-20 09:13:28.210665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.969 [2024-11-20 09:13:28.210680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:34312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.969 [2024-11-20 09:13:28.210693] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.969 [2024-11-20 09:13:28.210708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.969 [2024-11-20 09:13:28.210721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.969 [2024-11-20 09:13:28.210736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:34328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.969 [2024-11-20 09:13:28.210750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.969 [2024-11-20 09:13:28.210764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:34336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.969 [2024-11-20 09:13:28.210777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.969 [2024-11-20 09:13:28.210796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:34344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.969 [2024-11-20 09:13:28.210810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.969 [2024-11-20 09:13:28.210825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:34352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.969 [2024-11-20 09:13:28.210854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.969 [2024-11-20 09:13:28.210868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 
lba:34360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.969 [2024-11-20 09:13:28.210881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.969 [2024-11-20 09:13:28.210896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:34368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.969 [2024-11-20 09:13:28.210909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.969 [2024-11-20 09:13:28.210923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:34376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.969 [2024-11-20 09:13:28.210937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.969 [2024-11-20 09:13:28.210951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:34384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.969 [2024-11-20 09:13:28.210964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.969 [2024-11-20 09:13:28.210978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:34392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.969 [2024-11-20 09:13:28.210992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.969 [2024-11-20 09:13:28.211006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.969 [2024-11-20 09:13:28.211019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.969 [2024-11-20 
09:13:28.211033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:34408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.969 [2024-11-20 09:13:28.211046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.970 [2024-11-20 09:13:28.211060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:34416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.970 [2024-11-20 09:13:28.211073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.970 [2024-11-20 09:13:28.211088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:34424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.970 [2024-11-20 09:13:28.211100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.970 [2024-11-20 09:13:28.211115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:34432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.970 [2024-11-20 09:13:28.211128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.970 [2024-11-20 09:13:28.211143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:34440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.970 [2024-11-20 09:13:28.211160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.970 [2024-11-20 09:13:28.211175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:34448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.970 [2024-11-20 09:13:28.211188] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.970 [2024-11-20 09:13:28.211203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:34456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.970 [2024-11-20 09:13:28.211216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.970 [2024-11-20 09:13:28.211245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:34464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.970 [2024-11-20 09:13:28.211259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.970 [2024-11-20 09:13:28.211274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.970 [2024-11-20 09:13:28.211287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.970 [2024-11-20 09:13:28.211310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:34480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.970 [2024-11-20 09:13:28.211326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.970 [2024-11-20 09:13:28.211359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.970 [2024-11-20 09:13:28.211376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34488 len:8 PRP1 0x0 PRP2 0x0 00:22:56.970 [2024-11-20 09:13:28.211390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.970 [2024-11-20 09:13:28.211409] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.970 [2024-11-20 09:13:28.211421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.970 [2024-11-20 09:13:28.211432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34496 len:8 PRP1 0x0 PRP2 0x0 00:22:56.970 [2024-11-20 09:13:28.211445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.970 [2024-11-20 09:13:28.211458] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.970 [2024-11-20 09:13:28.211469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.970 [2024-11-20 09:13:28.211480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34504 len:8 PRP1 0x0 PRP2 0x0 00:22:56.970 [2024-11-20 09:13:28.211493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.970 [2024-11-20 09:13:28.211505] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.970 [2024-11-20 09:13:28.211516] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.970 [2024-11-20 09:13:28.211527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34512 len:8 PRP1 0x0 PRP2 0x0 00:22:56.970 [2024-11-20 09:13:28.211539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.970 [2024-11-20 09:13:28.211552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.970 [2024-11-20 09:13:28.211563] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.970 [2024-11-20 
09:13:28.211574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34520 len:8 PRP1 0x0 PRP2 0x0
00:22:56.970 [2024-11-20 09:13:28.211591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:56.970 [2024-11-20 09:13:28.211605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:56.970 [2024-11-20 09:13:28.211615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[... the same four-line abort/manual-complete sequence repeats for every remaining queued WRITE, lba:34528 through lba:35032 in steps of 8 (timestamps 09:13:28.211627 through 09:13:28.226693), each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:22:56.973 [2024-11-20 09:13:28.226767] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:56.973 [2024-11-20 09:13:28.226824]
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.973 [2024-11-20 09:13:28.226843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.973 [2024-11-20 09:13:28.226858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.973 [2024-11-20 09:13:28.226871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.973 [2024-11-20 09:13:28.226884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.973 [2024-11-20 09:13:28.226897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.973 [2024-11-20 09:13:28.226910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.973 [2024-11-20 09:13:28.226922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.973 [2024-11-20 09:13:28.226936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:56.973 [2024-11-20 09:13:28.226997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1804560 (9): Bad file descriptor 00:22:56.973 [2024-11-20 09:13:28.230240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:56.973 [2024-11-20 09:13:28.254078] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:22:56.973 8402.90 IOPS, 32.82 MiB/s [2024-11-20T08:13:34.002Z] 8417.18 IOPS, 32.88 MiB/s [2024-11-20T08:13:34.002Z] 8432.08 IOPS, 32.94 MiB/s [2024-11-20T08:13:34.002Z] 8446.46 IOPS, 32.99 MiB/s [2024-11-20T08:13:34.002Z] 8463.43 IOPS, 33.06 MiB/s 00:22:56.973 Latency(us) 00:22:56.973 [2024-11-20T08:13:34.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.973 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:56.973 Verification LBA range: start 0x0 length 0x4000 00:22:56.973 NVMe0n1 : 15.01 8465.42 33.07 469.34 0.00 14298.56 534.00 28350.39 00:22:56.973 [2024-11-20T08:13:34.002Z] =================================================================================================================== 00:22:56.973 [2024-11-20T08:13:34.002Z] Total : 8465.42 33.07 469.34 0.00 14298.56 534.00 28350.39 00:22:56.973 Received shutdown signal, test time was about 15.000000 seconds 00:22:56.973 00:22:56.973 Latency(us) 00:22:56.973 [2024-11-20T08:13:34.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.973 [2024-11-20T08:13:34.002Z] =================================================================================================================== 00:22:56.973 [2024-11-20T08:13:34.002Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:56.973 09:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:56.973 09:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:22:56.973 09:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:22:56.973 09:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3400917 00:22:56.973 09:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:56.974 09:13:33 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3400917 /var/tmp/bdevperf.sock 00:22:56.974 09:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 3400917 ']' 00:22:56.974 09:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:56.974 09:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:56.974 09:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:56.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:56.974 09:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:56.974 09:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:57.231 09:13:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:57.231 09:13:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:22:57.231 09:13:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:57.488 [2024-11-20 09:13:34.497803] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:57.745 09:13:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:57.745 [2024-11-20 09:13:34.766618] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:58.002 09:13:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:58.260 NVMe0n1 00:22:58.260 09:13:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:58.824 00:22:58.824 09:13:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:59.081 00:22:59.338 09:13:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:59.338 09:13:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:59.595 09:13:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:59.852 09:13:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:03.127 09:13:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:03.127 09:13:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:03.127 09:13:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3401593 00:23:03.127 09:13:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:03.127 09:13:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3401593 00:23:04.498 { 00:23:04.498 "results": [ 00:23:04.498 { 00:23:04.498 "job": "NVMe0n1", 00:23:04.498 "core_mask": "0x1", 00:23:04.498 "workload": "verify", 00:23:04.498 "status": "finished", 00:23:04.498 "verify_range": { 00:23:04.498 "start": 0, 00:23:04.498 "length": 16384 00:23:04.498 }, 00:23:04.498 "queue_depth": 128, 00:23:04.498 "io_size": 4096, 00:23:04.498 "runtime": 1.021212, 00:23:04.498 "iops": 8310.713152606902, 00:23:04.498 "mibps": 32.46372325237071, 00:23:04.498 "io_failed": 0, 00:23:04.498 "io_timeout": 0, 00:23:04.498 "avg_latency_us": 15335.162026454404, 00:23:04.498 "min_latency_us": 3470.9807407407407, 00:23:04.498 "max_latency_us": 12718.838518518518 00:23:04.498 } 00:23:04.498 ], 00:23:04.498 "core_count": 1 00:23:04.498 } 00:23:04.498 09:13:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:04.498 [2024-11-20 09:13:34.013155] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:23:04.498 [2024-11-20 09:13:34.013249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3400917 ] 00:23:04.498 [2024-11-20 09:13:34.082910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.498 [2024-11-20 09:13:34.140656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:04.498 [2024-11-20 09:13:36.639396] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:04.498 [2024-11-20 09:13:36.639483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.498 [2024-11-20 09:13:36.639507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.498 [2024-11-20 09:13:36.639524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.498 [2024-11-20 09:13:36.639538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.498 [2024-11-20 09:13:36.639552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.498 [2024-11-20 09:13:36.639566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.498 [2024-11-20 09:13:36.639579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.498 [2024-11-20 09:13:36.639593] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.498 [2024-11-20 09:13:36.639607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:23:04.498 [2024-11-20 09:13:36.639652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:23:04.498 [2024-11-20 09:13:36.639684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15db560 (9): Bad file descriptor 00:23:04.498 [2024-11-20 09:13:36.650432] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:23:04.498 Running I/O for 1 seconds... 00:23:04.498 8232.00 IOPS, 32.16 MiB/s 00:23:04.498 Latency(us) 00:23:04.499 [2024-11-20T08:13:41.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.499 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:04.499 Verification LBA range: start 0x0 length 0x4000 00:23:04.499 NVMe0n1 : 1.02 8310.71 32.46 0.00 0.00 15335.16 3470.98 12718.84 00:23:04.499 [2024-11-20T08:13:41.528Z] =================================================================================================================== 00:23:04.499 [2024-11-20T08:13:41.528Z] Total : 8310.71 32.46 0.00 0.00 15335.16 3470.98 12718.84 00:23:04.499 09:13:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:04.499 09:13:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:04.499 09:13:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:04.755 09:13:41 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:04.755 09:13:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:05.012 09:13:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:05.269 09:13:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:08.548 09:13:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:08.548 09:13:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:08.548 09:13:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3400917 00:23:08.548 09:13:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 3400917 ']' 00:23:08.548 09:13:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 3400917 00:23:08.548 09:13:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:23:08.548 09:13:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:08.548 09:13:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3400917 00:23:08.548 09:13:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:08.549 09:13:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:08.549 09:13:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3400917' 00:23:08.549 killing 
process with pid 3400917 00:23:08.549 09:13:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 3400917 00:23:08.549 09:13:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 3400917 00:23:08.806 09:13:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:08.806 09:13:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:09.064 09:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:09.064 09:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:09.064 09:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:09.064 09:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:09.064 09:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:23:09.064 09:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:09.064 09:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:23:09.064 09:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:09.064 09:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:09.064 rmmod nvme_tcp 00:23:09.064 rmmod nvme_fabrics 00:23:09.064 rmmod nvme_keyring 00:23:09.320 09:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:09.320 09:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:23:09.320 09:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:23:09.320 09:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3398653 ']' 00:23:09.320 09:13:46 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3398653 00:23:09.320 09:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 3398653 ']' 00:23:09.320 09:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 3398653 00:23:09.320 09:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:23:09.320 09:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:09.320 09:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3398653 00:23:09.320 09:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:09.320 09:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:09.321 09:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3398653' 00:23:09.321 killing process with pid 3398653 00:23:09.321 09:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 3398653 00:23:09.321 09:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 3398653 00:23:09.579 09:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:09.579 09:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:09.579 09:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:09.579 09:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:23:09.579 09:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:23:09.579 09:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:09.579 09:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:23:09.579 09:13:46 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:09.579 09:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:09.579 09:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.579 09:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:09.579 09:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.481 09:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:11.481 00:23:11.481 real 0m36.049s 00:23:11.481 user 2m7.164s 00:23:11.481 sys 0m6.063s 00:23:11.481 09:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:11.481 09:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:11.481 ************************************ 00:23:11.481 END TEST nvmf_failover 00:23:11.481 ************************************ 00:23:11.482 09:13:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:11.482 09:13:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:11.482 09:13:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:11.482 09:13:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.482 ************************************ 00:23:11.482 START TEST nvmf_host_discovery 00:23:11.482 ************************************ 00:23:11.482 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:11.741 * Looking for test storage... 
00:23:11.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:11.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.741 --rc genhtml_branch_coverage=1 00:23:11.741 --rc genhtml_function_coverage=1 00:23:11.741 --rc 
genhtml_legend=1 00:23:11.741 --rc geninfo_all_blocks=1 00:23:11.741 --rc geninfo_unexecuted_blocks=1 00:23:11.741 00:23:11.741 ' 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:11.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.741 --rc genhtml_branch_coverage=1 00:23:11.741 --rc genhtml_function_coverage=1 00:23:11.741 --rc genhtml_legend=1 00:23:11.741 --rc geninfo_all_blocks=1 00:23:11.741 --rc geninfo_unexecuted_blocks=1 00:23:11.741 00:23:11.741 ' 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:11.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.741 --rc genhtml_branch_coverage=1 00:23:11.741 --rc genhtml_function_coverage=1 00:23:11.741 --rc genhtml_legend=1 00:23:11.741 --rc geninfo_all_blocks=1 00:23:11.741 --rc geninfo_unexecuted_blocks=1 00:23:11.741 00:23:11.741 ' 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:11.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.741 --rc genhtml_branch_coverage=1 00:23:11.741 --rc genhtml_function_coverage=1 00:23:11.741 --rc genhtml_legend=1 00:23:11.741 --rc geninfo_all_blocks=1 00:23:11.741 --rc geninfo_unexecuted_blocks=1 00:23:11.741 00:23:11.741 ' 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:11.741 09:13:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.741 09:13:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.741 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.742 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.742 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:11.742 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.742 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:23:11.742 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:11.742 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:11.742 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:11.742 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:11.742 09:13:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:11.742 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:11.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:11.742 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:11.742 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:11.742 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:11.742 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:11.742 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:11.742 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:11.742 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:11.742 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:11.742 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:11.742 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:11.742 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:11.742 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:11.742 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:11.742 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:11.742 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:23:11.742 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.742 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:11.742 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.742 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:11.742 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:11.742 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:23:11.742 09:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.271 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:14.271 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:23:14.271 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:14.271 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:14.271 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:14.271 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:14.271 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:14.271 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:23:14.271 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:14.271 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:23:14.271 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:23:14.271 
09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:23:14.271 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:23:14.271 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:23:14.271 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:23:14.271 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:14.271 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:14.271 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:14.271 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:14.271 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:14.271 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:14.271 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:14.272 09:13:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:14.272 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:14.272 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:14.272 Found net devices under 0000:09:00.0: cvl_0_0 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:14.272 Found net devices under 0000:09:00.1: cvl_0_1 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:14.272 09:13:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:14.272 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:14.272 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:14.272 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:14.272 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:14.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:14.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:23:14.272 00:23:14.272 --- 10.0.0.2 ping statistics --- 00:23:14.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.272 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:23:14.272 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:14.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:14.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:23:14.272 00:23:14.272 --- 10.0.0.1 ping statistics --- 00:23:14.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.272 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:23:14.272 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:14.272 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:23:14.272 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:14.272 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:14.272 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:14.272 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:14.272 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:14.272 
09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:14.272 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:14.272 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:14.272 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:14.272 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:14.272 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.272 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3404326 00:23:14.272 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:14.272 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3404326 00:23:14.272 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 3404326 ']' 00:23:14.272 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.272 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:14.272 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:14.272 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:14.272 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.272 [2024-11-20 09:13:51.128183] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:23:14.273 [2024-11-20 09:13:51.128274] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.273 [2024-11-20 09:13:51.199611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.273 [2024-11-20 09:13:51.255505] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.273 [2024-11-20 09:13:51.255553] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:14.273 [2024-11-20 09:13:51.255584] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.273 [2024-11-20 09:13:51.255596] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:14.273 [2024-11-20 09:13:51.255606] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:14.273 [2024-11-20 09:13:51.256230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.531 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:14.531 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:23:14.531 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:14.531 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:14.531 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.531 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.531 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:14.531 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.531 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.531 [2024-11-20 09:13:51.389141] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.531 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.531 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:14.531 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.531 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.531 [2024-11-20 09:13:51.397372] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:14.531 09:13:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.531 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:14.531 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.531 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.531 null0 00:23:14.531 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.531 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:14.531 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.531 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.531 null1 00:23:14.531 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.531 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:14.531 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.531 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.531 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.531 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3404356 00:23:14.531 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:14.531 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3404356 /tmp/host.sock 00:23:14.531 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@833 -- # '[' -z 3404356 ']' 00:23:14.531 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:23:14.531 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:14.531 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:14.531 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:14.531 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:14.531 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.531 [2024-11-20 09:13:51.469849] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:23:14.531 [2024-11-20 09:13:51.469928] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3404356 ] 00:23:14.531 [2024-11-20 09:13:51.537672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.789 [2024-11-20 09:13:51.597388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.789 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:14.789 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:23:14.789 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:14.789 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:14.789 
09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.789 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.789 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.789 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:14.789 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.789 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.789 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.789 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:14.789 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:14.789 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:14.789 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.790 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:14.790 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.790 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:14.790 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:14.790 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.790 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:14.790 09:13:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:14.790 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:14.790 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.790 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:14.790 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.790 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:14.790 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:14.790 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.790 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:14.790 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:14.790 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.790 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.790 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.790 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:14.790 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:14.790 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.790 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:14.790 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:23:14.790 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:14.790 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:15.048 09:13:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.048 [2024-11-20 09:13:51.974890] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:15.048 09:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.048 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:15.048 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:15.048 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:15.048 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:15.048 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.048 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:23:15.048 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:15.048 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:15.048 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.048 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:15.048 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:15.048 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:15.048 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:15.048 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:15.048 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:15.048 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:15.049 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:15.049 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:23:15.049 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:15.049 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.049 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:15.049 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.049 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.307 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:15.307 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:15.307 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:23:15.307 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:15.307 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:15.307 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.307 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.307 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.307 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:15.307 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:15.307 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:15.307 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:15.307 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:15.307 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 
00:23:15.307 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:15.307 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:15.307 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.307 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.307 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:15.307 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:15.307 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.307 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:23:15.307 09:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:23:15.873 [2024-11-20 09:13:52.767969] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:15.873 [2024-11-20 09:13:52.767995] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:15.873 [2024-11-20 09:13:52.768024] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:15.873 [2024-11-20 09:13:52.856300] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:16.131 [2024-11-20 09:13:52.915017] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:16.131 [2024-11-20 09:13:52.916089] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x186bfa0:1 started. 
00:23:16.131 [2024-11-20 09:13:52.917863] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:16.131 [2024-11-20 09:13:52.917886] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:16.131 [2024-11-20 09:13:52.925028] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x186bfa0 was disconnected and freed. delete nvme_qpair. 00:23:16.131 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:16.131 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:16.131 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:23:16.131 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:16.131 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:16.131 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.131 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.131 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:16.131 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:16.389 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.389 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.389 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:16.389 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # 
waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:16.389 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:16.389 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:16.389 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:16.389 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:16.389 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:23:16.389 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.389 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:16.389 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.389 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:16.389 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.389 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:16.389 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.389 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:16.389 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:16.389 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:16.389 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" 
== "$NVMF_PORT" ]]' 00:23:16.389 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:16.389 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:16.389 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:16.389 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:16.390 09:13:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 
00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.390 [2024-11-20 09:13:53.338604] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x186c180:1 started. 
00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:16.390 [2024-11-20 09:13:53.346213] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x186c180 was disconnected and freed. delete nvme_qpair. 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock 
notify_get_notifications -i 1 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.390 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.649 [2024-11-20 09:13:53.427010] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:16.649 [2024-11-20 09:13:53.427481] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:16.649 [2024-11-20 09:13:53.427515] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 
00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:16.649 09:13:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.649 [2024-11-20 09:13:53.514174] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths 
nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:16.649 09:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:23:16.907 [2024-11-20 09:13:53.814854] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:23:16.907 [2024-11-20 09:13:53.814923] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:16.907 [2024-11-20 09:13:53.814939] 
bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:16.908 [2024-11-20 09:13:53.814947] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:17.843 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:17.843 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:17.843 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:23:17.843 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:17.843 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:17.843 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.843 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.843 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:17.843 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:17.843 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.843 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:17.843 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:17.843 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:17.843 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # 
expected_count=0 00:23:17.843 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:17.843 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:17.843 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:17.843 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:17.843 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:17.844 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:23:17.844 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:17.844 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:17.844 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.844 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.844 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.844 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:17.844 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:17.844 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:23:17.844 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:17.844 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:17.844 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.844 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.844 [2024-11-20 09:13:54.659270] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:17.844 [2024-11-20 09:13:54.659333] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:17.844 [2024-11-20 09:13:54.659760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.844 [2024-11-20 09:13:54.659811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.844 [2024-11-20 09:13:54.659845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:17.844 [2024-11-20 09:13:54.659861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:17.844 [2024-11-20 09:13:54.659875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:17.844 [2024-11-20 09:13:54.659889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:17.844 [2024-11-20 09:13:54.659904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:17.844 [2024-11-20 09:13:54.659917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:17.844 [2024-11-20 09:13:54.659931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183c550 is same with the state(6) to be set
00:23:17.844 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:17.844 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:23:17.844 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:23:17.844 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:23:17.844 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:23:17.844 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:23:17.844 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names
00:23:17.844 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:17.844 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:17.844 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:17.844 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:17.844 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:17.844 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
[2024-11-20 09:13:54.669763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183c550 (9): Bad file descriptor
00:23:17.844 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
[2024-11-20 09:13:54.679805] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
[2024-11-20 09:13:54.679827] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
[2024-11-20 09:13:54.679837] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
[2024-11-20 09:13:54.679846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
[2024-11-20 09:13:54.679889] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:23:17.844 [2024-11-20 09:13:54.680101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:17.844 [2024-11-20 09:13:54.680131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x183c550 with addr=10.0.0.2, port=4420
00:23:17.844 [2024-11-20 09:13:54.680148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183c550 is same with the state(6) to be set
00:23:17.844 [2024-11-20 09:13:54.680171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183c550 (9): Bad file descriptor
00:23:17.844 [2024-11-20 09:13:54.680192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:23:17.844 [2024-11-20 09:13:54.680207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:23:17.844 [2024-11-20 09:13:54.680223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:23:17.844 [2024-11-20 09:13:54.680236] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:23:17.844 [2024-11-20 09:13:54.680247] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:23:17.844 [2024-11-20 09:13:54.680256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:23:17.844 [2024-11-20 09:13:54.689921] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:23:17.844 [2024-11-20 09:13:54.689942] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:23:17.844 [2024-11-20 09:13:54.689951] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:23:17.844 [2024-11-20 09:13:54.689958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:23:17.844 [2024-11-20 09:13:54.689996] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:23:17.844 [2024-11-20 09:13:54.690164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:17.844 [2024-11-20 09:13:54.690197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x183c550 with addr=10.0.0.2, port=4420
00:23:17.844 [2024-11-20 09:13:54.690214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183c550 is same with the state(6) to be set
00:23:17.844 [2024-11-20 09:13:54.690237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183c550 (9): Bad file descriptor
00:23:17.844 [2024-11-20 09:13:54.690258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:23:17.844 [2024-11-20 09:13:54.690272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:23:17.844 [2024-11-20 09:13:54.690286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:23:17.844 [2024-11-20 09:13:54.690311] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:23:17.844 [2024-11-20 09:13:54.690323] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:23:17.844 [2024-11-20 09:13:54.690331] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:23:17.844 [2024-11-20 09:13:54.700044] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:23:17.844 [2024-11-20 09:13:54.700064] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:23:17.844 [2024-11-20 09:13:54.700073] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:23:17.844 [2024-11-20 09:13:54.700080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:23:17.844 [2024-11-20 09:13:54.700103] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:23:17.844 [2024-11-20 09:13:54.700243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:17.844 [2024-11-20 09:13:54.700271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x183c550 with addr=10.0.0.2, port=4420
00:23:17.844 [2024-11-20 09:13:54.700293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183c550 is same with the state(6) to be set
00:23:17.844 [2024-11-20 09:13:54.700324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183c550 (9): Bad file descriptor
00:23:17.844 [2024-11-20 09:13:54.700346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:23:17.844 [2024-11-20 09:13:54.700361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:23:17.844 [2024-11-20 09:13:54.700374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:23:17.844 [2024-11-20 09:13:54.700387] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:23:17.845 [2024-11-20 09:13:54.700397] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:23:17.845 [2024-11-20 09:13:54.700406] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:23:17.845 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:17.845 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:23:17.845 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:23:17.845 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:23:17.845 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:23:17.845 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:23:17.845 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:23:17.845 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list
00:23:17.845 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:17.845 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:17.845 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:17.845 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:17.845 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:17.845 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
[2024-11-20 09:13:54.710136] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
[2024-11-20 09:13:54.710159] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
[2024-11-20 09:13:54.710169] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
[2024-11-20 09:13:54.710176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
[2024-11-20 09:13:54.710216] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
[2024-11-20 09:13:54.710420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-11-20 09:13:54.710449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x183c550 with addr=10.0.0.2, port=4420
[2024-11-20 09:13:54.710466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183c550 is same with the state(6) to be set
[2024-11-20 09:13:54.710489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183c550 (9): Bad file descriptor
[2024-11-20 09:13:54.710510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
[2024-11-20 09:13:54.710524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
[2024-11-20 09:13:54.710538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
[2024-11-20 09:13:54.710551] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:23:17.845 [2024-11-20 09:13:54.710560] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:23:17.845 [2024-11-20 09:13:54.710568] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:23:17.845 [2024-11-20 09:13:54.720249] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:23:17.845 [2024-11-20 09:13:54.720271] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:23:17.845 [2024-11-20 09:13:54.720296] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:23:17.845 [2024-11-20 09:13:54.720313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:23:17.845 [2024-11-20 09:13:54.720340] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:23:17.845 [2024-11-20 09:13:54.720464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:17.845 [2024-11-20 09:13:54.720492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x183c550 with addr=10.0.0.2, port=4420
00:23:17.845 [2024-11-20 09:13:54.720508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183c550 is same with the state(6) to be set
00:23:17.845 [2024-11-20 09:13:54.720536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183c550 (9): Bad file descriptor
00:23:17.845 [2024-11-20 09:13:54.720557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:23:17.845 [2024-11-20 09:13:54.720572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:23:17.845 [2024-11-20 09:13:54.720585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:23:17.845 [2024-11-20 09:13:54.720598] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:23:17.845 [2024-11-20 09:13:54.720607] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:23:17.845 [2024-11-20 09:13:54.720615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:23:17.845 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:17.845 [2024-11-20 09:13:54.730375] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:23:17.845 [2024-11-20 09:13:54.730397] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:23:17.845 [2024-11-20 09:13:54.730407] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:23:17.845 [2024-11-20 09:13:54.730415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:23:17.845 [2024-11-20 09:13:54.730440] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:23:17.845 [2024-11-20 09:13:54.730562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:17.845 [2024-11-20 09:13:54.730589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x183c550 with addr=10.0.0.2, port=4420
00:23:17.845 [2024-11-20 09:13:54.730605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183c550 is same with the state(6) to be set
00:23:17.845 [2024-11-20 09:13:54.730627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183c550 (9): Bad file descriptor
00:23:17.845 [2024-11-20 09:13:54.730648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:23:17.845 [2024-11-20 09:13:54.730662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:23:17.845 [2024-11-20 09:13:54.730676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:23:17.845 [2024-11-20 09:13:54.730689] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:23:17.845 [2024-11-20 09:13:54.730698] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:23:17.845 [2024-11-20 09:13:54.730707] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:23:17.845 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:23:17.845 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:23:17.845 [2024-11-20 09:13:54.740476] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:23:17.845 [2024-11-20 09:13:54.740499] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:23:17.845 [2024-11-20 09:13:54.740510] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:23:17.845 [2024-11-20 09:13:54.740518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:23:17.845 [2024-11-20 09:13:54.740543] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:23:17.845 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:23:17.845 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
[2024-11-20 09:13:54.740780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-11-20 09:13:54.740809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x183c550 with addr=10.0.0.2, port=4420
00:23:17.845 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
[2024-11-20 09:13:54.740825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183c550 is same with the state(6) to be set
[2024-11-20 09:13:54.740848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183c550 (9): Bad file descriptor
[2024-11-20 09:13:54.740880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
[2024-11-20 09:13:54.740898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
[2024-11-20 09:13:54.740913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
[2024-11-20 09:13:54.740925] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
[2024-11-20 09:13:54.740935] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:23:17.846 [2024-11-20 09:13:54.740943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]'
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
[2024-11-20 09:13:54.745886] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found
[2024-11-20 09:13:54.745913] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]]
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count ))
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]'
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]'
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:17.846 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]]
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]'
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]'
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]]
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count ))
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:18.104 09:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:19.036 [2024-11-20 09:13:55.996492] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:23:19.036 [2024-11-20 09:13:55.996517] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:23:19.036 [2024-11-20 09:13:55.996540] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:23:19.295 [2024-11-20 09:13:56.082838] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0
00:23:19.295 [2024-11-20 09:13:56.187595] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421
00:23:19.295 [2024-11-20 09:13:56.188427] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1839b10:1 started.
00:23:19.295 [2024-11-20 09:13:56.190632] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:23:19.295 [2024-11-20 09:13:56.190667] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0
00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:23:19.295 [2024-11-20 09:13:56.194000] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1839b10 was disconnected and freed. delete nvme_qpair.
00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.295 request: 00:23:19.295 { 00:23:19.295 "name": "nvme", 00:23:19.295 "trtype": "tcp", 00:23:19.295 "traddr": "10.0.0.2", 00:23:19.295 "adrfam": "ipv4", 00:23:19.295 "trsvcid": "8009", 00:23:19.295 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:19.295 "wait_for_attach": true, 00:23:19.295 "method": "bdev_nvme_start_discovery", 00:23:19.295 "req_id": 1 00:23:19.295 } 00:23:19.295 Got JSON-RPC error response 00:23:19.295 response: 00:23:19.295 { 00:23:19.295 "code": -17, 00:23:19.295 "message": "File exists" 00:23:19.295 } 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # jq -r '.[].name' 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@650 -- # local es=0 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.295 request: 00:23:19.295 { 00:23:19.295 "name": "nvme_second", 00:23:19.295 "trtype": "tcp", 00:23:19.295 "traddr": "10.0.0.2", 00:23:19.295 "adrfam": "ipv4", 00:23:19.295 "trsvcid": "8009", 00:23:19.295 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:19.295 "wait_for_attach": true, 00:23:19.295 "method": "bdev_nvme_start_discovery", 00:23:19.295 "req_id": 1 00:23:19.295 } 00:23:19.295 Got JSON-RPC error response 00:23:19.295 response: 00:23:19.295 { 00:23:19.295 "code": -17, 00:23:19.295 "message": "File exists" 00:23:19.295 } 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:19.295 
09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:19.295 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.553 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:19.553 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:19.553 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:19.553 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:19.553 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.553 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:19.553 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.553 
09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:19.553 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.553 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:19.553 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:19.553 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:19.553 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:19.553 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:19.553 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:19.553 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:19.553 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:19.553 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:19.553 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.553 09:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.485 [2024-11-20 09:13:57.398024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:23:20.485 [2024-11-20 09:13:57.398085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19a0ff0 with addr=10.0.0.2, port=8010 00:23:20.485 [2024-11-20 09:13:57.398117] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:20.485 [2024-11-20 09:13:57.398132] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:20.485 [2024-11-20 09:13:57.398146] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:21.416 [2024-11-20 09:13:58.400525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.417 [2024-11-20 09:13:58.400590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19a0ff0 with addr=10.0.0.2, port=8010 00:23:21.417 [2024-11-20 09:13:58.400620] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:21.417 [2024-11-20 09:13:58.400636] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:21.417 [2024-11-20 09:13:58.400649] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:22.787 [2024-11-20 09:13:59.402724] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:22.787 request: 00:23:22.787 { 00:23:22.787 "name": "nvme_second", 00:23:22.787 "trtype": "tcp", 00:23:22.787 "traddr": "10.0.0.2", 00:23:22.787 "adrfam": "ipv4", 00:23:22.787 "trsvcid": "8010", 00:23:22.787 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:22.787 "wait_for_attach": false, 00:23:22.787 "attach_timeout_ms": 3000, 00:23:22.787 "method": "bdev_nvme_start_discovery", 00:23:22.787 "req_id": 1 00:23:22.787 } 00:23:22.787 Got JSON-RPC error response 00:23:22.787 response: 00:23:22.787 { 00:23:22.787 "code": -110, 00:23:22.787 "message": "Connection timed out" 00:23:22.787 } 00:23:22.787 09:13:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:22.787 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:22.787 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:22.787 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:22.787 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:22.787 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:22.787 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:22.787 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.787 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:22.787 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.787 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:22.787 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:22.787 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.787 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:22.787 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:22.787 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3404356 00:23:22.788 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:22.788 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:22.788 09:13:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:23:22.788 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:22.788 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:23:22.788 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:22.788 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:22.788 rmmod nvme_tcp 00:23:22.788 rmmod nvme_fabrics 00:23:22.788 rmmod nvme_keyring 00:23:22.788 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:22.788 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:23:22.788 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:23:22.788 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3404326 ']' 00:23:22.788 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3404326 00:23:22.788 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 3404326 ']' 00:23:22.788 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 3404326 00:23:22.788 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:23:22.788 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:22.788 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3404326 00:23:22.788 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:22.788 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:22.788 09:13:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3404326' 00:23:22.788 killing process with pid 3404326 00:23:22.788 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 3404326 00:23:22.788 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 3404326 00:23:22.788 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:22.788 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:22.788 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:22.788 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:23:22.788 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:23:22.788 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:22.788 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:23:22.788 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:22.788 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:22.788 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.788 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:22.788 09:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.326 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:25.326 00:23:25.326 real 0m13.344s 00:23:25.326 user 0m18.905s 00:23:25.326 sys 0m2.956s 00:23:25.326 09:14:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:25.326 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.326 ************************************ 00:23:25.326 END TEST nvmf_host_discovery 00:23:25.326 ************************************ 00:23:25.326 09:14:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:25.326 09:14:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:25.326 09:14:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:25.326 09:14:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.326 ************************************ 00:23:25.326 START TEST nvmf_host_multipath_status 00:23:25.326 ************************************ 00:23:25.326 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:25.326 * Looking for test storage... 
00:23:25.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:25.326 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:25.326 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:23:25.326 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:25.326 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:25.326 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:25.326 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:25.326 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:25.326 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:23:25.326 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:23:25.326 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:23:25.326 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:23:25.326 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:23:25.326 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:23:25.326 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:23:25.326 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:25.326 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:23:25.326 09:14:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:23:25.326 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:25.326 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:25.326 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:23:25.326 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:23:25.326 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:25.326 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:23:25.326 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:23:25.326 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:23:25.326 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:23:25.326 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:25.326 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:23:25.326 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:23:25.326 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:25.326 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:25.326 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:23:25.326 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:25.326 09:14:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:25.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.326 --rc genhtml_branch_coverage=1 00:23:25.326 --rc genhtml_function_coverage=1 00:23:25.326 --rc genhtml_legend=1 00:23:25.326 --rc geninfo_all_blocks=1 00:23:25.326 --rc geninfo_unexecuted_blocks=1 00:23:25.326 00:23:25.326 ' 00:23:25.326 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:25.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.326 --rc genhtml_branch_coverage=1 00:23:25.327 --rc genhtml_function_coverage=1 00:23:25.327 --rc genhtml_legend=1 00:23:25.327 --rc geninfo_all_blocks=1 00:23:25.327 --rc geninfo_unexecuted_blocks=1 00:23:25.327 00:23:25.327 ' 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:25.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.327 --rc genhtml_branch_coverage=1 00:23:25.327 --rc genhtml_function_coverage=1 00:23:25.327 --rc genhtml_legend=1 00:23:25.327 --rc geninfo_all_blocks=1 00:23:25.327 --rc geninfo_unexecuted_blocks=1 00:23:25.327 00:23:25.327 ' 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:25.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.327 --rc genhtml_branch_coverage=1 00:23:25.327 --rc genhtml_function_coverage=1 00:23:25.327 --rc genhtml_legend=1 00:23:25.327 --rc geninfo_all_blocks=1 00:23:25.327 --rc geninfo_unexecuted_blocks=1 00:23:25.327 00:23:25.327 ' 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:25.327 
09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:25.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:25.327 09:14:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:23:25.327 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:27.288 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:27.288 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:27.288 Found net devices under 0000:09:00.0: cvl_0_0 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.288 09:14:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:27.288 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:27.289 Found net devices under 0000:09:00.1: cvl_0_1 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:27.289 09:14:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:27.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:27.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:23:27.289 00:23:27.289 --- 10.0.0.2 ping statistics --- 00:23:27.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.289 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:27.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:27.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:23:27.289 00:23:27.289 --- 10.0.0.1 ping statistics --- 00:23:27.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.289 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3407394 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3407394 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 3407394 ']' 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:27.289 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:27.547 [2024-11-20 09:14:04.325717] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:23:27.547 [2024-11-20 09:14:04.325792] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:27.547 [2024-11-20 09:14:04.399647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:27.547 [2024-11-20 09:14:04.457858] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:27.547 [2024-11-20 09:14:04.457911] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:27.547 [2024-11-20 09:14:04.457940] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:27.547 [2024-11-20 09:14:04.457951] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:27.547 [2024-11-20 09:14:04.457961] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:27.547 [2024-11-20 09:14:04.459518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.547 [2024-11-20 09:14:04.459525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.805 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:27.805 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:23:27.805 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:27.805 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:27.805 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:27.805 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:27.805 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3407394 00:23:27.805 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:28.063 [2024-11-20 09:14:04.915245] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.063 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:23:28.321 Malloc0 00:23:28.321 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:28.578 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:28.836 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:29.093 [2024-11-20 09:14:06.053359] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:29.093 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:29.350 [2024-11-20 09:14:06.322092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:29.350 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3407680 00:23:29.350 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:29.350 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:29.350 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3407680 /var/tmp/bdevperf.sock 00:23:29.350 09:14:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 3407680 ']' 00:23:29.350 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:29.350 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:29.350 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:29.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:29.350 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:29.350 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:29.608 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:29.608 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:23:29.608 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:30.172 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:30.430 Nvme0n1 00:23:30.430 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:30.995 Nvme0n1 00:23:30.995 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:30.995 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:32.899 09:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:32.899 09:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:33.156 09:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:33.413 09:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:34.345 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:34.345 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:34.345 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.345 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:34.603 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:34.603 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:34.603 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.603 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:34.861 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:34.861 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:34.861 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.861 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:35.119 09:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.119 09:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:35.119 09:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.119 09:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:35.684 09:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.684 09:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:35.684 09:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.684 09:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:35.684 09:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.684 09:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:35.684 09:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.684 09:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:35.941 09:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.941 09:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:35.941 09:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:36.506 09:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:23:36.507 09:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
00:23:37.878 09:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:23:37.878 09:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:23:37.878 09:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:37.878 09:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:23:37.878 09:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:37.879 09:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:23:37.879 09:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:37.879 09:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:23:38.135 09:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:38.135 09:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:23:38.135 09:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:38.135 09:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:23:38.392 09:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:38.392 09:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:23:38.392 09:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:38.392 09:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:23:38.649 09:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:38.649 09:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:23:38.649 09:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:38.649 09:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:23:38.907 09:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:38.907 09:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:23:38.907 09:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:38.907 09:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:23:39.164 09:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:39.164 09:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized
00:23:39.164 09:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:23:39.420 09:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:23:39.985 09:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1
00:23:40.916 09:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true
00:23:40.916 09:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:23:40.916 09:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:40.916 09:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:23:41.174 09:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:41.174 09:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:23:41.174 09:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:41.174 09:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:23:41.432 09:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:41.432 09:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:23:41.432 09:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:41.432 09:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:23:41.689 09:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:41.689 09:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:23:41.689 09:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:41.689 09:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:23:41.947 09:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:41.947 09:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:23:41.947 09:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:41.947 09:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:23:42.204 09:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:42.204 09:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:23:42.204 09:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:42.204 09:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:23:42.461 09:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:42.461 09:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible
00:23:42.461 09:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:23:42.718 09:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:23:42.975 09:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
00:23:44.349 09:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:23:44.349 09:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:23:44.349 09:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:44.349 09:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:23:44.349 09:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:44.349 09:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:23:44.349 09:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:44.349 09:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:23:44.606 09:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:44.606 09:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:23:44.606 09:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:44.606 09:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:23:44.864 09:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:44.864 09:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:23:44.864 09:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:44.864 09:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:23:45.121 09:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:45.121 09:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:23:45.122 09:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:45.122 09:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:23:45.379 09:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:45.379 09:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:23:45.379 09:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:45.379 09:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:23:45.638 09:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:45.638 09:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:23:45.638 09:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:23:45.897 09:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:23:46.155 09:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:23:47.527 09:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:23:47.527 09:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:23:47.527 09:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:47.527 09:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:23:47.527 09:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:47.527 09:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:23:47.527 09:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:47.527 09:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:23:47.784 09:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:47.784 09:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:23:47.784 09:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:47.784 09:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:23:48.042 09:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:48.042 09:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:23:48.042 09:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:48.042 09:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:23:48.299 09:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:48.300 09:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:23:48.300 09:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:48.300 09:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:23:48.556 09:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:48.556 09:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:23:48.556 09:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:48.556 09:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:23:48.814 09:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:48.814 09:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:23:48.814 09:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:23:49.071 09:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:23:49.328 09:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:23:50.700 09:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:23:50.700 09:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:23:50.700 09:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:50.700 09:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:23:50.700 09:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:50.700 09:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:23:50.700 09:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:50.700 09:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:23:50.957 09:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:50.957 09:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:23:50.957 09:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:50.957 09:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:23:51.215 09:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:51.215 09:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:23:51.215 09:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:51.215 09:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:23:51.472 09:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:51.472 09:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:23:51.472 09:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:51.472 09:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:23:51.729 09:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:51.729 09:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:23:51.729 09:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:51.729 09:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:23:51.986 09:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:51.986 09:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:23:52.551 09:14:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:23:52.551 09:14:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:23:52.551 09:14:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:23:52.825 09:14:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:23:54.196 09:14:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:23:54.196 09:14:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:23:54.196 09:14:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:54.196 09:14:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:23:54.196 09:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:54.196 09:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:23:54.196 09:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:54.196 09:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:23:54.454 09:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:54.454 09:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:23:54.454 09:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:54.454 09:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:23:54.712 09:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:54.712 09:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:23:54.712 09:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:54.712 09:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:23:54.969 09:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:54.969 09:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:23:54.969 09:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:54.969 09:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:23:55.227 09:14:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:55.227 09:14:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:23:55.227 09:14:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:55.227 09:14:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:23:55.484 09:14:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:55.484 09:14:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:23:55.484 09:14:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:23:55.741 09:14:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:23:56.306 09:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:23:57.240 09:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:23:57.240 09:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:23:57.240 09:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:57.240 09:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:23:57.498 09:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:57.498 09:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:23:57.498 09:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:57.498 09:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:23:57.792 09:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:57.792 09:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:23:57.792 09:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:57.792 09:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:23:58.075 09:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:58.075 09:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:23:58.075 09:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:58.075 09:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:23:58.333 09:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:58.333 09:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:23:58.333 09:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:58.333 09:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:23:58.590 09:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:58.590 09:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:23:58.590 09:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:58.590 09:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:23:58.848 09:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:58.848 09:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:23:58.848 09:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:23:59.106 09:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:23:59.365 09:14:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
00:24:00.298 09:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:24:00.298 09:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:24:00.298 09:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:00.298 09:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:24:00.556 09:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:00.556 09:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:24:00.556 09:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:00.556 09:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:24:00.813 09:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:00.813 09:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:24:00.813 09:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:00.813 09:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:24:01.071 09:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:01.071 09:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:24:01.071 09:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:01.071 09:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:24:01.328 09:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:01.328 09:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:24:01.328 09:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:01.328 09:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:24:01.895 09:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:01.895 09:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:24:01.895 09:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:01.895 09:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:24:01.895 09:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:01.895 09:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:24:01.895 09:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:02.152 09:14:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:02.717 09:14:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:03.650 09:14:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:03.650 09:14:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:03.650 09:14:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.650 09:14:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:03.908 09:14:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.908 09:14:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:03.908 09:14:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.908 09:14:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:04.165 09:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:04.165 09:14:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:04.165 09:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.165 09:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:04.424 09:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.424 09:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:04.424 09:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.424 09:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:04.681 09:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.681 09:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:04.681 09:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.681 09:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:04.939 09:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.939 
09:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:04.939 09:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.939 09:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:05.196 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:05.196 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3407680 00:24:05.196 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 3407680 ']' 00:24:05.196 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 3407680 00:24:05.196 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:24:05.196 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:05.196 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3407680 00:24:05.196 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:05.196 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:05.196 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3407680' 00:24:05.196 killing process with pid 3407680 00:24:05.196 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 3407680 00:24:05.196 
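The `port_status` checks repeated throughout this log all apply the same jq filter (`.poll_groups[].io_paths[] | select (.transport.trsvcid=="<port>").<field>`) to the output of the `bdev_nvme_get_io_paths` RPC. A minimal Python sketch of that selection, using a hypothetical sample shaped like the RPC output (field values are illustrative, not from a real run):

```python
import json

# Hypothetical sample shaped like `bdev_nvme_get_io_paths` output;
# values are illustrative, not taken from this test run.
sample = json.loads("""
{
  "poll_groups": [
    {"io_paths": [
      {"transport": {"trsvcid": "4420"}, "current": true, "connected": true, "accessible": true},
      {"transport": {"trsvcid": "4421"}, "current": false, "connected": true, "accessible": false}
    ]}
  ]
}
""")

def port_status(data, trsvcid, field):
    """Mirror of the log's jq filter:
    .poll_groups[].io_paths[] | select(.transport.trsvcid=="<port>").<field>"""
    for group in data["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == trsvcid:
                return path[field]
    return None

print(port_status(sample, "4420", "connected"))   # True
print(port_status(sample, "4421", "accessible"))  # False
```

The test script then compares the extracted value against the expected string (`[[ true == \t\r\u\e ]]`), which is what the repeated bracket-test lines above are doing.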
09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 3407680 00:24:05.196 { 00:24:05.196 "results": [ 00:24:05.196 { 00:24:05.196 "job": "Nvme0n1", 00:24:05.196 "core_mask": "0x4", 00:24:05.196 "workload": "verify", 00:24:05.196 "status": "terminated", 00:24:05.196 "verify_range": { 00:24:05.196 "start": 0, 00:24:05.196 "length": 16384 00:24:05.196 }, 00:24:05.196 "queue_depth": 128, 00:24:05.196 "io_size": 4096, 00:24:05.196 "runtime": 34.203582, 00:24:05.196 "iops": 7994.7766874241415, 00:24:05.196 "mibps": 31.229596435250553, 00:24:05.196 "io_failed": 0, 00:24:05.196 "io_timeout": 0, 00:24:05.196 "avg_latency_us": 15983.803815509642, 00:24:05.196 "min_latency_us": 364.0888888888889, 00:24:05.196 "max_latency_us": 4026531.84 00:24:05.196 } 00:24:05.196 ], 00:24:05.196 "core_count": 1 00:24:05.196 } 00:24:05.456 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3407680 00:24:05.456 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:05.456 [2024-11-20 09:14:06.388707] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:24:05.456 [2024-11-20 09:14:06.388794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3407680 ] 00:24:05.456 [2024-11-20 09:14:06.455481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.456 [2024-11-20 09:14:06.514089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:05.456 Running I/O for 90 seconds... 
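The bdevperf summary JSON above reports both IOPS and MiB/s; the throughput figure follows directly from the IOPS and the 4096-byte I/O size. A quick sanity-check of that arithmetic, with the values copied from the results block:

```python
# Sanity-check of the bdevperf summary arithmetic:
# MiB/s = IOPS * io_size_bytes / 2**20. Values copied from the log's JSON.
iops = 7994.7766874241415   # from "iops"
io_size = 4096              # bytes per I/O, from "io_size"
runtime_s = 34.203582       # from "runtime"

mibps = iops * io_size / (2 ** 20)
total_ios = iops * runtime_s  # approximate I/O count over the whole run

print(round(mibps, 6))  # 31.229596, matching the reported "mibps"
```

With a 4 KiB I/O size the conversion reduces to `iops / 256`, which is a handy mental check when reading these summaries.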
00:24:05.456 8647.00 IOPS, 33.78 MiB/s [2024-11-20T08:14:42.485Z] 8646.50 IOPS, 33.78 MiB/s [2024-11-20T08:14:42.485Z] 8595.00 IOPS, 33.57 MiB/s [2024-11-20T08:14:42.485Z] 8605.75 IOPS, 33.62 MiB/s [2024-11-20T08:14:42.486Z] 8610.40 IOPS, 33.63 MiB/s [2024-11-20T08:14:42.486Z] 8554.67 IOPS, 33.42 MiB/s [2024-11-20T08:14:42.486Z] 8511.14 IOPS, 33.25 MiB/s [2024-11-20T08:14:42.486Z] 8462.75 IOPS, 33.06 MiB/s [2024-11-20T08:14:42.486Z] 8436.22 IOPS, 32.95 MiB/s [2024-11-20T08:14:42.486Z] 8451.20 IOPS, 33.01 MiB/s [2024-11-20T08:14:42.486Z] 8461.64 IOPS, 33.05 MiB/s [2024-11-20T08:14:42.486Z] 8468.67 IOPS, 33.08 MiB/s [2024-11-20T08:14:42.486Z] 8478.38 IOPS, 33.12 MiB/s [2024-11-20T08:14:42.486Z] 8490.64 IOPS, 33.17 MiB/s [2024-11-20T08:14:42.486Z] [2024-11-20 09:14:22.866531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.457 [2024-11-20 09:14:22.866589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:05.457 [2024-11-20 09:14:22.866659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.457 [2024-11-20 09:14:22.866682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:05.457 [2024-11-20 09:14:22.866707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.457 [2024-11-20 09:14:22.866724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:05.457 [2024-11-20 09:14:22.866748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:05.457 [2024-11-20 09:14:22.866765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:05.457 [2024-11-20 09:14:22.866788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.457 [2024-11-20 09:14:22.866822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:05.457 [2024-11-20 09:14:22.866845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.457 [2024-11-20 09:14:22.866875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:05.457 [2024-11-20 09:14:22.866899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.457 [2024-11-20 09:14:22.866914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:05.457 [2024-11-20 09:14:22.866936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.457 [2024-11-20 09:14:22.866951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:05.457 [2024-11-20 09:14:22.866972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.457 [2024-11-20 09:14:22.866988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 
00:24:05.457 [2024-11-20 09:14:22.867009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.457 [2024-11-20 09:14:22.867037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:05.457 [2024-11-20 09:14:22.867059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.457 [2024-11-20 09:14:22.867075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:05.457 [2024-11-20 09:14:22.867096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.457 [2024-11-20 09:14:22.867113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:05.457 [2024-11-20 09:14:22.867134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.457 [2024-11-20 09:14:22.867150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:05.457 [2024-11-20 09:14:22.867171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.457 [2024-11-20 09:14:22.867186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:05.457 [2024-11-20 09:14:22.867207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.457 
[2024-11-20 09:14:22.867223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:05.457 [2024-11-20 09:14:22.867244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.457 [2024-11-20 09:14:22.867259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:05.457 [2024-11-20 09:14:22.867280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.457 [2024-11-20 09:14:22.867296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:05.457 [2024-11-20 09:14:22.867344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.457 [2024-11-20 09:14:22.867362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:05.457 [2024-11-20 09:14:22.867384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.457 [2024-11-20 09:14:22.867400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:05.457 [2024-11-20 09:14:22.867421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.457 [2024-11-20 09:14:22.867437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:05.457 [2024-11-20 
09:14:22.867458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.457 [2024-11-20 09:14:22.867474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:05.457 [2024-11-20 09:14:22.867496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.457 [2024-11-20 09:14:22.867516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:05.457 [2024-11-20 09:14:22.867539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.457 [2024-11-20 09:14:22.867555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:05.457 [2024-11-20 09:14:22.867577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.457 [2024-11-20 09:14:22.867592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:05.457 [2024-11-20 09:14:22.867631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.457 [2024-11-20 09:14:22.867648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:05.457 [2024-11-20 09:14:22.867668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.457 [2024-11-20 09:14:22.867684] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:05.457 [2024-11-20 09:14:22.867705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.457 [2024-11-20 09:14:22.867720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:05.457 [2024-11-20 09:14:22.867741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.457 [2024-11-20 09:14:22.867757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:05.457 [2024-11-20 09:14:22.867777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.457 [2024-11-20 09:14:22.867792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:05.457 [2024-11-20 09:14:22.867813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.457 [2024-11-20 09:14:22.867829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:05.457 [2024-11-20 09:14:22.867850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.457 [2024-11-20 09:14:22.867865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:05.457 [2024-11-20 09:14:22.867886] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.457 [2024-11-20 09:14:22.867903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:05.458 [2024-11-20 09:14:22.867924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.458 [2024-11-20 09:14:22.867939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:05.458 [2024-11-20 09:14:22.867960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.458 [2024-11-20 09:14:22.867975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:05.458 [2024-11-20 09:14:22.868115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.458 [2024-11-20 09:14:22.868138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:05.458 [2024-11-20 09:14:22.868182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.458 [2024-11-20 09:14:22.868201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:05.458 [2024-11-20 09:14:22.868227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.458 [2024-11-20 09:14:22.868244] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:05.458 [2024-11-20 09:14:22.868269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.458 [2024-11-20 09:14:22.868286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:05.458 [2024-11-20 09:14:22.868317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.458 [2024-11-20 09:14:22.868336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:05.458 [2024-11-20 09:14:22.868361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.458 [2024-11-20 09:14:22.868378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:05.458 [2024-11-20 09:14:22.868404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.458 [2024-11-20 09:14:22.868420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:05.458 [2024-11-20 09:14:22.868445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.458 [2024-11-20 09:14:22.868461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:05.458 [2024-11-20 09:14:22.868486] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.458 [2024-11-20 09:14:22.868502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:05.458 [2024-11-20 09:14:22.868527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.458 [2024-11-20 09:14:22.868543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:05.458 [2024-11-20 09:14:22.868567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.458 [2024-11-20 09:14:22.868584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:05.458 [2024-11-20 09:14:22.868609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.458 [2024-11-20 09:14:22.868625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:05.458 [2024-11-20 09:14:22.868655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.458 [2024-11-20 09:14:22.868672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:05.458 [2024-11-20 09:14:22.868696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.458 [2024-11-20 09:14:22.868713] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:05.458 [2024-11-20 09:14:22.868737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.458 [2024-11-20 09:14:22.868754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:05.458 [2024-11-20 09:14:22.868778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.458 [2024-11-20 09:14:22.868795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:05.458 [2024-11-20 09:14:22.868819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.458 [2024-11-20 09:14:22.868836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:05.458 [2024-11-20 09:14:22.868860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.458 [2024-11-20 09:14:22.868876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:05.458 [2024-11-20 09:14:22.868916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.458 [2024-11-20 09:14:22.868934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:05.458 [2024-11-20 09:14:22.868958] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.458 [2024-11-20 09:14:22.868974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:05.458 [2024-11-20 09:14:22.868997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.458 [2024-11-20 09:14:22.869013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:05.458 [2024-11-20 09:14:22.869053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.458 [2024-11-20 09:14:22.869069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:05.458 [2024-11-20 09:14:22.869095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.458 [2024-11-20 09:14:22.869111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:05.458 [2024-11-20 09:14:22.869135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.458 [2024-11-20 09:14:22.869152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:05.458 [2024-11-20 09:14:22.869177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.458 [2024-11-20 09:14:22.869198] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:05.458 [2024-11-20 09:14:22.869223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.458 [2024-11-20 09:14:22.869240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:05.458 [2024-11-20 09:14:22.869264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:99448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.458 [2024-11-20 09:14:22.869281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:05.458 [2024-11-20 09:14:22.869312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.458 [2024-11-20 09:14:22.869331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:05.458 [2024-11-20 09:14:22.869355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.458 [2024-11-20 09:14:22.869372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:05.458 [2024-11-20 09:14:22.869397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.458 [2024-11-20 09:14:22.869414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:05.458 [2024-11-20 09:14:22.869581] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:99480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.458 [2024-11-20 09:14:22.869604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:05.458 [2024-11-20 09:14:22.869635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.459 [2024-11-20 09:14:22.869653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:05.459 [2024-11-20 09:14:22.869680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:99496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.459 [2024-11-20 09:14:22.869698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:05.459 [2024-11-20 09:14:22.869724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.459 [2024-11-20 09:14:22.869740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:05.459 [2024-11-20 09:14:22.869767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.459 [2024-11-20 09:14:22.869784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:05.459 [2024-11-20 09:14:22.869810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.459 [2024-11-20 09:14:22.869827] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:05.459 [2024-11-20 09:14:22.869869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.459 [2024-11-20 09:14:22.869891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:05.459 [2024-11-20 09:14:22.869919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.459 [2024-11-20 09:14:22.869935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:05.459 [2024-11-20 09:14:22.869962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.459 [2024-11-20 09:14:22.869978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:05.459 [2024-11-20 09:14:22.870003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.459 [2024-11-20 09:14:22.870020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:05.459 [2024-11-20 09:14:22.870045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.459 [2024-11-20 09:14:22.870061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:05.459 [2024-11-20 09:14:22.870087] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.459 [2024-11-20 09:14:22.870102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:05.459 [2024-11-20 09:14:22.870128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:99576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.459 [2024-11-20 09:14:22.870144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:05.459 [2024-11-20 09:14:22.870169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.459 [2024-11-20 09:14:22.870185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:05.459 [2024-11-20 09:14:22.870210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.459 [2024-11-20 09:14:22.870226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:05.459 [2024-11-20 09:14:22.870252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.459 [2024-11-20 09:14:22.870268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:05.459 [2024-11-20 09:14:22.870293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.459 [2024-11-20 09:14:22.870339] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:05.459 [2024-11-20 09:14:22.870376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:99616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.459 [2024-11-20 09:14:22.870394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:05.459 [2024-11-20 09:14:22.870421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.459 [2024-11-20 09:14:22.870437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:05.459 [2024-11-20 09:14:22.870472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.459 [2024-11-20 09:14:22.870490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:05.459 [2024-11-20 09:14:22.870517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.459 [2024-11-20 09:14:22.870533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:05.459 [2024-11-20 09:14:22.870559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.459 [2024-11-20 09:14:22.870576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.459 [2024-11-20 09:14:22.870602] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.459 [2024-11-20 09:14:22.870634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.459 [2024-11-20 09:14:22.870660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.459 [2024-11-20 09:14:22.870676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:05.459 [2024-11-20 09:14:22.870702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.459 [2024-11-20 09:14:22.870718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:05.459 [2024-11-20 09:14:22.870744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.459 [2024-11-20 09:14:22.870760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:05.459 [2024-11-20 09:14:22.870785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:99688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.459 [2024-11-20 09:14:22.870801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:05.459 [2024-11-20 09:14:22.870826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:99696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.459 [2024-11-20 09:14:22.870842] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:05.459 [2024-11-20 09:14:22.870867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.459 [2024-11-20 09:14:22.870883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:05.459 [2024-11-20 09:14:22.870908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.459 [2024-11-20 09:14:22.870923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:05.459 [2024-11-20 09:14:22.870948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.459 [2024-11-20 09:14:22.870964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:05.459 [2024-11-20 09:14:22.870995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.459 [2024-11-20 09:14:22.871012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:05.459 [2024-11-20 09:14:22.871037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.460 [2024-11-20 09:14:22.871053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:05.460 [2024-11-20 09:14:22.871078] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.460 [2024-11-20 09:14:22.871094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:05.460 [2024-11-20 09:14:22.871119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.460 [2024-11-20 09:14:22.871135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:05.460 [2024-11-20 09:14:22.871160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:99760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.460 [2024-11-20 09:14:22.871176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:05.460 [2024-11-20 09:14:22.871202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.460 [2024-11-20 09:14:22.871218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:05.460 [2024-11-20 09:14:22.871243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.460 [2024-11-20 09:14:22.871258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:05.460 [2024-11-20 09:14:22.871283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.460 [2024-11-20 09:14:22.871300] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:05.460 [2024-11-20 09:14:22.871355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.460 [2024-11-20 09:14:22.871372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:05.460 [2024-11-20 09:14:22.871407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.460 [2024-11-20 09:14:22.871424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:05.460 [2024-11-20 09:14:22.871451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.460 [2024-11-20 09:14:22.871468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:05.460 [2024-11-20 09:14:22.871494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.460 [2024-11-20 09:14:22.871510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:05.460 [2024-11-20 09:14:22.871536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.460 [2024-11-20 09:14:22.871556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:05.460 [2024-11-20 09:14:22.871584] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.460 [2024-11-20 09:14:22.871600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:05.460 [2024-11-20 09:14:22.871642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.460 [2024-11-20 09:14:22.871658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:05.460 [2024-11-20 09:14:22.871683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.460 [2024-11-20 09:14:22.871700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:05.460 [2024-11-20 09:14:22.871725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:98840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-11-20 09:14:22.871741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:05.460 [2024-11-20 09:14:22.871767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-11-20 09:14:22.871782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:05.460 [2024-11-20 09:14:22.871808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:98856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-11-20 09:14:22.871824] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:05.460 [2024-11-20 09:14:22.871849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:98864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-11-20 09:14:22.871864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:05.460 [2024-11-20 09:14:22.871889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-11-20 09:14:22.871905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:05.460 [2024-11-20 09:14:22.871930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-11-20 09:14:22.871946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:05.460 [2024-11-20 09:14:22.871971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-11-20 09:14:22.871987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:05.460 [2024-11-20 09:14:22.872012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:98896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-11-20 09:14:22.872028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:05.460 [2024-11-20 09:14:22.872053] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:98904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-11-20 09:14:22.872074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:05.460 [2024-11-20 09:14:22.872100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-11-20 09:14:22.872116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:05.460 [2024-11-20 09:14:22.872141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-11-20 09:14:22.872157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:05.460 [2024-11-20 09:14:22.872182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:98928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-11-20 09:14:22.872197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:05.460 [2024-11-20 09:14:22.872222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-11-20 09:14:22.872238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:05.460 [2024-11-20 09:14:22.872263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:98944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-11-20 09:14:22.872278] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:05.460 [2024-11-20 09:14:22.872326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:98952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-11-20 09:14:22.872345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:05.460 [2024-11-20 09:14:22.872372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:98960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.460 [2024-11-20 09:14:22.872389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:05.460 [2024-11-20 09:14:22.872415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.460 [2024-11-20 09:14:22.872432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:05.460 8469.53 IOPS, 33.08 MiB/s [2024-11-20T08:14:42.489Z] 7940.19 IOPS, 31.02 MiB/s [2024-11-20T08:14:42.489Z] 7473.12 IOPS, 29.19 MiB/s [2024-11-20T08:14:42.490Z] 7057.94 IOPS, 27.57 MiB/s [2024-11-20T08:14:42.490Z] 6704.11 IOPS, 26.19 MiB/s [2024-11-20T08:14:42.490Z] 6775.15 IOPS, 26.47 MiB/s [2024-11-20T08:14:42.490Z] 6843.19 IOPS, 26.73 MiB/s [2024-11-20T08:14:42.490Z] 6956.55 IOPS, 27.17 MiB/s [2024-11-20T08:14:42.490Z] 7163.30 IOPS, 27.98 MiB/s [2024-11-20T08:14:42.490Z] 7346.04 IOPS, 28.70 MiB/s [2024-11-20T08:14:42.490Z] 7489.16 IOPS, 29.25 MiB/s [2024-11-20T08:14:42.490Z] 7511.50 IOPS, 29.34 MiB/s [2024-11-20T08:14:42.490Z] 7537.56 IOPS, 29.44 MiB/s [2024-11-20T08:14:42.490Z] 7560.18 IOPS, 29.53 MiB/s [2024-11-20T08:14:42.490Z] 7665.03 IOPS, 29.94 MiB/s 
[2024-11-20T08:14:42.490Z] 7791.40 IOPS, 30.44 MiB/s [2024-11-20T08:14:42.490Z] 7910.71 IOPS, 30.90 MiB/s [2024-11-20T08:14:42.490Z] [2024-11-20 09:14:39.433649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:38792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.461 [2024-11-20 09:14:39.433713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:05.461 [2024-11-20 09:14:39.433777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:38808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.461 [2024-11-20 09:14:39.433803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:05.461 [2024-11-20 09:14:39.434005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:38824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.461 [2024-11-20 09:14:39.434043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.461 [2024-11-20 09:14:39.434069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:38840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.461 [2024-11-20 09:14:39.434085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:05.461 [2024-11-20 09:14:39.434108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:38856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.461 [2024-11-20 09:14:39.434124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:05.461 [2024-11-20 09:14:39.434145] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:38872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.461 [2024-11-20 09:14:39.434161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:05.461 [2024-11-20 09:14:39.434182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:38888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.461 [2024-11-20 09:14:39.434198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:05.461 [2024-11-20 09:14:39.434220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:38072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.461 [2024-11-20 09:14:39.434236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:05.461 [2024-11-20 09:14:39.434258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:38104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.461 [2024-11-20 09:14:39.434288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:05.461 [2024-11-20 09:14:39.434321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:38136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.461 [2024-11-20 09:14:39.434340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:05.461 [2024-11-20 09:14:39.434366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:38904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.461 [2024-11-20 09:14:39.434383] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:05.461 [2024-11-20 09:14:39.434406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:38920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.461 [2024-11-20 09:14:39.434425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:05.461 [2024-11-20 09:14:39.434448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:38936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.461 [2024-11-20 09:14:39.434465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:05.461 [2024-11-20 09:14:39.434487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:38952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.461 [2024-11-20 09:14:39.434503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:05.461 [2024-11-20 09:14:39.434526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:38968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.461 [2024-11-20 09:14:39.434548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:05.461 [2024-11-20 09:14:39.434572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.461 [2024-11-20 09:14:39.434589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:05.461 [2024-11-20 09:14:39.434611] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:38200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.461 [2024-11-20 09:14:39.434627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:05.461 [2024-11-20 09:14:39.434666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.461 [2024-11-20 09:14:39.434682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:05.461 [2024-11-20 09:14:39.434703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:38264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.461 [2024-11-20 09:14:39.434719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:05.461 [2024-11-20 09:14:39.434740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:38296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.461 [2024-11-20 09:14:39.434755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:05.461 [2024-11-20 09:14:39.434777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.461 [2024-11-20 09:14:39.434793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:05.461 [2024-11-20 09:14:39.434814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.461 [2024-11-20 09:14:39.434830] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:05.461 [2024-11-20 09:14:39.434866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:38392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.461 [2024-11-20 09:14:39.434882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:05.461 [2024-11-20 09:14:39.434902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:38424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.461 [2024-11-20 09:14:39.434917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:05.461 [2024-11-20 09:14:39.434938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:38456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.461 [2024-11-20 09:14:39.434953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:05.461 [2024-11-20 09:14:39.434974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.461 [2024-11-20 09:14:39.434989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:05.461 [2024-11-20 09:14:39.435009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.461 [2024-11-20 09:14:39.435028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:05.461 [2024-11-20 09:14:39.435050] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:38552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.461 [2024-11-20 09:14:39.435066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:05.461 [2024-11-20 09:14:39.436145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:38976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.461 [2024-11-20 09:14:39.436170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:05.462 [2024-11-20 09:14:39.436198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:38992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.462 [2024-11-20 09:14:39.436215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:05.462 [2024-11-20 09:14:39.436238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.462 [2024-11-20 09:14:39.436254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:05.462 [2024-11-20 09:14:39.436277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:39024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.462 [2024-11-20 09:14:39.436293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:05.462 [2024-11-20 09:14:39.436324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.462 [2024-11-20 09:14:39.436342] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:05.462 [2024-11-20 09:14:39.436364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.462 [2024-11-20 09:14:39.436381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:05.462 [2024-11-20 09:14:39.436403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.462 [2024-11-20 09:14:39.436420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:05.462 [2024-11-20 09:14:39.436442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:39088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.462 [2024-11-20 09:14:39.436458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:05.462 [2024-11-20 09:14:39.436482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.462 [2024-11-20 09:14:39.436498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:05.462 [2024-11-20 09:14:39.436520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:38160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.462 [2024-11-20 09:14:39.436536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:05.462 [2024-11-20 09:14:39.436558] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:38192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.462 [2024-11-20 09:14:39.436580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:05.462 [2024-11-20 09:14:39.436603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:38224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.462 [2024-11-20 09:14:39.436635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:05.462 [2024-11-20 09:14:39.436657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:38256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.462 [2024-11-20 09:14:39.436672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:05.462 [2024-11-20 09:14:39.436709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:38288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.462 [2024-11-20 09:14:39.436725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:05.462 [2024-11-20 09:14:39.436745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:38320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.462 [2024-11-20 09:14:39.436760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:05.462 [2024-11-20 09:14:39.436780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:38352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.462 [2024-11-20 09:14:39.436796] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:05.462 [2024-11-20 09:14:39.436816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:38384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.462 [2024-11-20 09:14:39.436831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:05.462 [2024-11-20 09:14:39.436852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:38416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.462 [2024-11-20 09:14:39.436866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:05.462 [2024-11-20 09:14:39.436887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:38448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.462 [2024-11-20 09:14:39.436902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:05.462 [2024-11-20 09:14:39.436922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.462 [2024-11-20 09:14:39.436937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:05.462 [2024-11-20 09:14:39.436957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:38512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.462 [2024-11-20 09:14:39.436988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:05.462 [2024-11-20 09:14:39.437011] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:38544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.462 [2024-11-20 09:14:39.437041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:05.462 [2024-11-20 09:14:39.437065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:38584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.462 [2024-11-20 09:14:39.437081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:05.462 [2024-11-20 09:14:39.437107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.462 [2024-11-20 09:14:39.437129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:05.462 [2024-11-20 09:14:39.437152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.462 [2024-11-20 09:14:39.437169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:05.462 [2024-11-20 09:14:39.437191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.462 [2024-11-20 09:14:39.437208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:05.462 7971.25 IOPS, 31.14 MiB/s [2024-11-20T08:14:42.491Z] 7978.12 IOPS, 31.16 MiB/s [2024-11-20T08:14:42.491Z] 7993.15 IOPS, 31.22 MiB/s [2024-11-20T08:14:42.491Z] Received shutdown signal, test time was about 34.204370 seconds 00:24:05.462 
00:24:05.462 Latency(us) 00:24:05.462 [2024-11-20T08:14:42.491Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.462 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:05.462 Verification LBA range: start 0x0 length 0x4000 00:24:05.462 Nvme0n1 : 34.20 7994.78 31.23 0.00 0.00 15983.80 364.09 4026531.84 00:24:05.462 [2024-11-20T08:14:42.491Z] =================================================================================================================== 00:24:05.462 [2024-11-20T08:14:42.491Z] Total : 7994.78 31.23 0.00 0.00 15983.80 364.09 4026531.84 00:24:05.462 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:05.720 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:05.720 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:05.720 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:05.720 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:05.720 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:24:05.720 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:05.720 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:24:05.720 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:05.720 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:05.720 rmmod nvme_tcp 00:24:05.720 rmmod nvme_fabrics 00:24:05.720 
rmmod nvme_keyring 00:24:05.720 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:05.720 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:24:05.720 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:24:05.720 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3407394 ']' 00:24:05.720 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3407394 00:24:05.720 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 3407394 ']' 00:24:05.720 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 3407394 00:24:05.720 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:24:05.720 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:05.720 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3407394 00:24:05.720 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:05.720 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:05.977 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3407394' 00:24:05.977 killing process with pid 3407394 00:24:05.977 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 3407394 00:24:05.978 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 3407394 00:24:05.978 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == 
iso ']' 00:24:05.978 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:05.978 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:05.978 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:24:05.978 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:24:05.978 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:05.978 09:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:24:05.978 09:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:05.978 09:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:05.978 09:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.978 09:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:05.978 09:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:08.512 00:24:08.512 real 0m43.170s 00:24:08.512 user 2m9.093s 00:24:08.512 sys 0m11.823s 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:08.512 ************************************ 00:24:08.512 END TEST nvmf_host_multipath_status 00:24:08.512 ************************************ 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # 
run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.512 ************************************ 00:24:08.512 START TEST nvmf_discovery_remove_ifc 00:24:08.512 ************************************ 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:08.512 * Looking for test storage... 00:24:08.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:24:08.512 09:14:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:08.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.512 --rc genhtml_branch_coverage=1 00:24:08.512 --rc genhtml_function_coverage=1 00:24:08.512 --rc genhtml_legend=1 00:24:08.512 --rc geninfo_all_blocks=1 
00:24:08.512 --rc geninfo_unexecuted_blocks=1 00:24:08.512 00:24:08.512 ' 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:08.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.512 --rc genhtml_branch_coverage=1 00:24:08.512 --rc genhtml_function_coverage=1 00:24:08.512 --rc genhtml_legend=1 00:24:08.512 --rc geninfo_all_blocks=1 00:24:08.512 --rc geninfo_unexecuted_blocks=1 00:24:08.512 00:24:08.512 ' 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:08.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.512 --rc genhtml_branch_coverage=1 00:24:08.512 --rc genhtml_function_coverage=1 00:24:08.512 --rc genhtml_legend=1 00:24:08.512 --rc geninfo_all_blocks=1 00:24:08.512 --rc geninfo_unexecuted_blocks=1 00:24:08.512 00:24:08.512 ' 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:08.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.512 --rc genhtml_branch_coverage=1 00:24:08.512 --rc genhtml_function_coverage=1 00:24:08.512 --rc genhtml_legend=1 00:24:08.512 --rc geninfo_all_blocks=1 00:24:08.512 --rc geninfo_unexecuted_blocks=1 00:24:08.512 00:24:08.512 ' 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.512 
09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.512 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.513 
09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:08.513 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:08.513 09:14:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:24:08.513 09:14:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:24:10.417 09:14:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:10.417 09:14:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:10.417 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:10.417 09:14:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:10.417 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:10.417 Found net devices under 0000:09:00.0: cvl_0_0 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:10.417 Found net devices under 0000:09:00.1: cvl_0_1 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:10.417 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:24:10.675 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:10.675 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:10.675 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:10.675 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:10.675 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:10.675 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:10.675 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:10.675 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:10.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:10.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:24:10.675 00:24:10.675 --- 10.0.0.2 ping statistics --- 00:24:10.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.675 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:24:10.675 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:10.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:10.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:24:10.675 00:24:10.675 --- 10.0.0.1 ping statistics --- 00:24:10.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.675 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:24:10.675 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:10.675 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:24:10.675 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:10.675 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:10.675 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:10.675 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:10.675 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:10.675 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:10.675 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:10.675 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:10.675 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:10.675 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:10.675 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.675 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3414163 00:24:10.675 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:10.675 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 3414163 00:24:10.675 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 3414163 ']' 00:24:10.675 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.675 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:10.675 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.675 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:10.675 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.675 [2024-11-20 09:14:47.584679] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:24:10.676 [2024-11-20 09:14:47.584758] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.676 [2024-11-20 09:14:47.654408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.933 [2024-11-20 09:14:47.711705] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.933 [2024-11-20 09:14:47.711751] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:10.933 [2024-11-20 09:14:47.711780] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.933 [2024-11-20 09:14:47.711791] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.933 [2024-11-20 09:14:47.711801] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:10.933 [2024-11-20 09:14:47.712444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.933 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:10.933 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:24:10.933 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:10.933 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:10.933 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.933 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.933 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:10.933 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.933 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.933 [2024-11-20 09:14:47.867006] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.933 [2024-11-20 09:14:47.875202] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:10.933 null0 00:24:10.933 [2024-11-20 09:14:47.907136] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:24:10.933 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.933 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3414182 00:24:10.933 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:10.933 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3414182 /tmp/host.sock 00:24:10.933 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 3414182 ']' 00:24:10.933 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:24:10.933 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:10.933 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:10.933 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:10.933 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:10.933 09:14:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:11.191 [2024-11-20 09:14:47.978974] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:24:11.191 [2024-11-20 09:14:47.979056] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3414182 ] 00:24:11.191 [2024-11-20 09:14:48.045377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.191 [2024-11-20 09:14:48.104176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.191 09:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:11.191 09:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:24:11.191 09:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:11.191 09:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:11.191 09:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.191 09:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:11.191 09:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.191 09:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:11.191 09:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.191 09:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:11.449 09:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.449 09:14:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:11.449 09:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.449 09:14:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:12.399 [2024-11-20 09:14:49.365433] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:12.399 [2024-11-20 09:14:49.365469] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:12.399 [2024-11-20 09:14:49.365501] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:12.657 [2024-11-20 09:14:49.451790] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:12.657 [2024-11-20 09:14:49.513539] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:12.657 [2024-11-20 09:14:49.514508] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x6b0c00:1 started. 
00:24:12.657 [2024-11-20 09:14:49.516212] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:12.657 [2024-11-20 09:14:49.516267] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:12.657 [2024-11-20 09:14:49.516330] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:12.657 [2024-11-20 09:14:49.516355] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:12.657 [2024-11-20 09:14:49.516391] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:12.657 09:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.657 09:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:12.657 09:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:12.657 09:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:12.657 09:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:12.657 09:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.657 09:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:12.657 09:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:12.657 09:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:12.657 [2024-11-20 09:14:49.523095] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6b0c00 was disconnected and freed. delete nvme_qpair. 
00:24:12.657 09:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.657 09:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:12.657 09:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:12.657 09:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:12.657 09:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:12.657 09:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:12.657 09:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:12.657 09:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.657 09:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:12.657 09:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:12.657 09:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:12.657 09:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:12.657 09:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.657 09:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:12.657 09:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:14.030 09:14:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:14.030 09:14:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:14.030 09:14:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:14.030 09:14:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.030 09:14:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:14.030 09:14:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:14.030 09:14:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:14.030 09:14:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.030 09:14:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:14.030 09:14:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:14.963 09:14:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:14.963 09:14:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:14.963 09:14:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:14.963 09:14:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.963 09:14:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:14.963 09:14:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:14.963 09:14:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:24:14.963 09:14:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.963 09:14:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:14.963 09:14:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:15.897 09:14:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:15.897 09:14:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:15.897 09:14:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:15.897 09:14:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.897 09:14:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:15.897 09:14:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:15.897 09:14:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:15.897 09:14:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.897 09:14:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:15.897 09:14:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:16.829 09:14:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:16.829 09:14:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:16.829 09:14:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:16.829 09:14:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.829 09:14:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:16.829 09:14:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:16.829 09:14:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:16.829 09:14:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.829 09:14:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:16.829 09:14:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:18.201 09:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:18.201 09:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:18.201 09:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:18.202 09:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.202 09:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:18.202 09:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:18.202 09:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:18.202 09:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.202 09:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:18.202 09:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:24:18.202 [2024-11-20 09:14:54.957607] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:18.202 [2024-11-20 09:14:54.957695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:18.202 [2024-11-20 09:14:54.957720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.202 [2024-11-20 09:14:54.957739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:18.202 [2024-11-20 09:14:54.957752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.202 [2024-11-20 09:14:54.957766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:18.202 [2024-11-20 09:14:54.957779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.202 [2024-11-20 09:14:54.957792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:18.202 [2024-11-20 09:14:54.957805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.202 [2024-11-20 09:14:54.957818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:18.202 [2024-11-20 09:14:54.957831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:18.202 [2024-11-20 09:14:54.957843] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68d400 is same with the state(6) to be set 00:24:18.202 [2024-11-20 09:14:54.967625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68d400 (9): Bad file descriptor 00:24:18.202 [2024-11-20 09:14:54.977671] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:18.202 [2024-11-20 09:14:54.977693] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:18.202 [2024-11-20 09:14:54.977703] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:18.202 [2024-11-20 09:14:54.977713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:18.202 [2024-11-20 09:14:54.977771] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:19.135 09:14:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:19.135 09:14:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:19.135 09:14:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:19.135 09:14:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.135 09:14:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:19.135 09:14:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:19.135 09:14:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:19.135 [2024-11-20 09:14:56.022345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:19.135 [2024-11-20 09:14:56.022426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68d400 with addr=10.0.0.2, port=4420 00:24:19.135 [2024-11-20 09:14:56.022455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68d400 is same with the state(6) to be set 00:24:19.135 [2024-11-20 09:14:56.022514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68d400 (9): Bad file descriptor 00:24:19.135 [2024-11-20 09:14:56.023003] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:24:19.135 [2024-11-20 09:14:56.023049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:19.135 [2024-11-20 09:14:56.023067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:19.135 [2024-11-20 09:14:56.023084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:19.135 [2024-11-20 09:14:56.023099] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:19.135 [2024-11-20 09:14:56.023111] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:19.135 [2024-11-20 09:14:56.023119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:19.135 [2024-11-20 09:14:56.023132] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:19.135 [2024-11-20 09:14:56.023142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:19.135 09:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.136 09:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:19.136 09:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:20.070 [2024-11-20 09:14:57.025642] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:20.070 [2024-11-20 09:14:57.025692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:24:20.070 [2024-11-20 09:14:57.025720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:20.070 [2024-11-20 09:14:57.025751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:20.070 [2024-11-20 09:14:57.025766] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:24:20.070 [2024-11-20 09:14:57.025780] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:20.070 [2024-11-20 09:14:57.025792] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:20.070 [2024-11-20 09:14:57.025800] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:20.070 [2024-11-20 09:14:57.025857] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:20.070 [2024-11-20 09:14:57.025931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.070 [2024-11-20 09:14:57.025956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-11-20 09:14:57.025976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.070 [2024-11-20 09:14:57.025989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-11-20 09:14:57.026002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:20.070 [2024-11-20 09:14:57.026014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-11-20 09:14:57.026027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.070 [2024-11-20 09:14:57.026040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-11-20 09:14:57.026053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.070 [2024-11-20 09:14:57.026065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.070 [2024-11-20 09:14:57.026078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:24:20.070 [2024-11-20 09:14:57.026135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x67cb40 (9): Bad file descriptor 00:24:20.070 [2024-11-20 09:14:57.027123] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:20.070 [2024-11-20 09:14:57.027145] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:24:20.070 09:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:20.070 09:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:20.070 09:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:20.070 09:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:24:20.070 09:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:20.070 09:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:20.070 09:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:20.070 09:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.070 09:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:20.070 09:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:20.070 09:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:20.328 09:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:20.328 09:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:20.328 09:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:20.328 09:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:20.328 09:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.328 09:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:20.328 09:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:20.328 09:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:20.328 09:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:24:20.328 09:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:20.328 09:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:21.270 09:14:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:21.270 09:14:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:21.270 09:14:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:21.270 09:14:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.270 09:14:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:21.270 09:14:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:21.270 09:14:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:21.270 09:14:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.270 09:14:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:21.270 09:14:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:22.201 [2024-11-20 09:14:59.079993] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:22.201 [2024-11-20 09:14:59.080030] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:22.201 [2024-11-20 09:14:59.080055] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:22.201 [2024-11-20 09:14:59.206477] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:22.201 09:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:22.201 09:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:22.201 09:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:22.201 09:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.201 09:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:22.201 09:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:22.201 09:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:22.201 09:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.458 09:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:22.458 09:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:22.458 [2024-11-20 09:14:59.381782] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:24:22.458 [2024-11-20 09:14:59.382683] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x697a40:1 started. 
00:24:22.458 [2024-11-20 09:14:59.384032] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:22.458 [2024-11-20 09:14:59.384077] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:22.458 [2024-11-20 09:14:59.384109] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:22.458 [2024-11-20 09:14:59.384131] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:22.458 [2024-11-20 09:14:59.384144] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:22.458 [2024-11-20 09:14:59.388832] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x697a40 was disconnected and freed. delete nvme_qpair. 00:24:23.391 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:23.391 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:23.391 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.391 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:23.391 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:23.391 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:23.391 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:23.391 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.391 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:23.391 09:15:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:23.391 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3414182 00:24:23.391 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 3414182 ']' 00:24:23.391 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 3414182 00:24:23.391 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:24:23.391 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:23.391 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3414182 00:24:23.391 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:23.391 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:23.391 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3414182' 00:24:23.391 killing process with pid 3414182 00:24:23.391 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 3414182 00:24:23.391 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 3414182 00:24:23.649 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:23.649 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:23.649 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:24:23.649 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:23.649 
09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:24:23.649 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:23.649 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:23.649 rmmod nvme_tcp 00:24:23.649 rmmod nvme_fabrics 00:24:23.649 rmmod nvme_keyring 00:24:23.649 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:23.649 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:24:23.649 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:24:23.649 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3414163 ']' 00:24:23.649 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3414163 00:24:23.649 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 3414163 ']' 00:24:23.649 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 3414163 00:24:23.649 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:24:23.649 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:23.649 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3414163 00:24:23.649 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:23.649 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:23.649 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3414163' 00:24:23.649 
killing process with pid 3414163 00:24:23.649 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 3414163 00:24:23.649 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 3414163 00:24:23.906 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:23.906 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:23.906 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:23.906 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:24:23.906 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:24:23.906 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:23.906 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:24:23.906 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:23.906 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:23.906 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.906 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:23.906 09:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.438 09:15:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:26.438 00:24:26.438 real 0m17.759s 00:24:26.438 user 0m25.648s 00:24:26.438 sys 0m3.057s 00:24:26.438 09:15:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:24:26.438 09:15:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:26.438 ************************************ 00:24:26.438 END TEST nvmf_discovery_remove_ifc 00:24:26.438 ************************************ 00:24:26.438 09:15:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:26.438 09:15:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:26.438 09:15:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:26.438 09:15:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.438 ************************************ 00:24:26.438 START TEST nvmf_identify_kernel_target 00:24:26.438 ************************************ 00:24:26.438 09:15:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:26.438 * Looking for test storage... 
00:24:26.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:26.438 09:15:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:26.438 09:15:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:24:26.438 09:15:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:26.438 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:26.438 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:26.438 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:26.438 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:26.438 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:24:26.438 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:24:26.438 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:24:26.438 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:24:26.438 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:24:26.438 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:24:26.438 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:24:26.438 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:26.438 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:24:26.438 09:15:03 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:24:26.438 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:26.438 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:26.438 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:24:26.438 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:24:26.438 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:26.438 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:24:26.438 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:24:26.438 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:24:26.438 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:24:26.438 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:26.438 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:24:26.438 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:24:26.438 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:26.438 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:26.438 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:24:26.438 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:26.438 09:15:03 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:26.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.438 --rc genhtml_branch_coverage=1 00:24:26.438 --rc genhtml_function_coverage=1 00:24:26.438 --rc genhtml_legend=1 00:24:26.438 --rc geninfo_all_blocks=1 00:24:26.438 --rc geninfo_unexecuted_blocks=1 00:24:26.438 00:24:26.438 ' 00:24:26.438 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:26.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.438 --rc genhtml_branch_coverage=1 00:24:26.438 --rc genhtml_function_coverage=1 00:24:26.438 --rc genhtml_legend=1 00:24:26.438 --rc geninfo_all_blocks=1 00:24:26.438 --rc geninfo_unexecuted_blocks=1 00:24:26.438 00:24:26.438 ' 00:24:26.438 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:26.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.438 --rc genhtml_branch_coverage=1 00:24:26.438 --rc genhtml_function_coverage=1 00:24:26.438 --rc genhtml_legend=1 00:24:26.438 --rc geninfo_all_blocks=1 00:24:26.438 --rc geninfo_unexecuted_blocks=1 00:24:26.438 00:24:26.438 ' 00:24:26.438 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:26.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.439 --rc genhtml_branch_coverage=1 00:24:26.439 --rc genhtml_function_coverage=1 00:24:26.439 --rc genhtml_legend=1 00:24:26.439 --rc geninfo_all_blocks=1 00:24:26.439 --rc geninfo_unexecuted_blocks=1 00:24:26.439 00:24:26.439 ' 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:26.439 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:24:26.439 09:15:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:28.339 09:15:05 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:28.339 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:28.339 09:15:05 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:28.339 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.339 09:15:05 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:28.339 Found net devices under 0000:09:00.0: cvl_0_0 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.339 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:28.339 Found net devices under 0000:09:00.1: cvl_0_1 
00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:28.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:28.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:24:28.340 00:24:28.340 --- 10.0.0.2 ping statistics --- 00:24:28.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.340 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:28.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:28.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:24:28.340 00:24:28.340 --- 10.0.0.1 ping statistics --- 00:24:28.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.340 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:28.340 
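The `nvmf_tcp_init` steps traced above (move one port of the NIC into a network namespace, address both sides, open the NVMe/TCP port, then ping-verify) can be condensed into a short sketch. The interface names, namespace name, IPs, and port are taken from this log; the commands need root to run for real, so the sketch dry-runs by default via an `echo` prefix.

```shell
# Sketch of the nvmf_tcp_init topology seen in the xtrace above.
# Assumptions: interface names cvl_0_0/cvl_0_1 and addresses 10.0.0.1/.2
# come from this log; clear RUN (RUN= ) to actually execute as root.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0            # target-side port, moved into the namespace
INI_IF=cvl_0_1            # initiator-side port, stays in the root namespace
RUN=${RUN:-echo}          # dry-run by default

$RUN ip netns add "$NS"
$RUN ip link set "$TGT_IF" netns "$NS"
$RUN ip addr add 10.0.0.1/24 dev "$INI_IF"
$RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
$RUN ip link set "$INI_IF" up
$RUN ip netns exec "$NS" ip link set "$TGT_IF" up
$RUN ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP listener port on the initiator-side interface.
$RUN iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
# Sanity checks, matching the two pings recorded in the log.
$RUN ping -c 1 10.0.0.2
$RUN ip netns exec "$NS" ping -c 1 10.0.0.1
```

Putting the target interface in its own namespace is what lets a single host act as both initiator (10.0.0.1) and target (10.0.0.2) over a real physical link.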
09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:28.340 09:15:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:29.716 Waiting for block devices as requested 00:24:29.716 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:29.716 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:29.975 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:29.975 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:29.975 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:29.975 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:30.233 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:30.233 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:30.233 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:24:30.491 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:30.491 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:30.491 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:30.491 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:30.770 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:30.770 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 
00:24:30.770 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:31.064 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:31.064 09:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:31.064 09:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:31.064 09:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:31.064 09:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:31.064 09:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:31.064 09:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:31.064 09:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:31.064 09:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:31.064 09:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:31.064 No valid GPT data, bailing 00:24:31.064 09:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:31.064 09:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:24:31.064 09:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:24:31.064 09:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:31.064 09:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:24:31.064 09:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:31.064 09:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:31.064 09:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:31.064 09:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:31.064 09:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:24:31.064 09:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:24:31.064 09:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:24:31.064 09:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:31.064 09:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:24:31.064 09:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:24:31.064 09:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:24:31.064 09:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:31.064 09:15:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:24:31.064 00:24:31.064 Discovery Log Number of Records 2, Generation counter 2 00:24:31.064 =====Discovery Log Entry 0====== 00:24:31.064 trtype: tcp 00:24:31.064 adrfam: ipv4 00:24:31.064 subtype: current discovery subsystem 
00:24:31.064 treq: not specified, sq flow control disable supported 00:24:31.064 portid: 1 00:24:31.064 trsvcid: 4420 00:24:31.064 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:31.064 traddr: 10.0.0.1 00:24:31.064 eflags: none 00:24:31.064 sectype: none 00:24:31.064 =====Discovery Log Entry 1====== 00:24:31.064 trtype: tcp 00:24:31.064 adrfam: ipv4 00:24:31.064 subtype: nvme subsystem 00:24:31.064 treq: not specified, sq flow control disable supported 00:24:31.064 portid: 1 00:24:31.064 trsvcid: 4420 00:24:31.064 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:31.064 traddr: 10.0.0.1 00:24:31.064 eflags: none 00:24:31.064 sectype: none 00:24:31.064 09:15:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:31.064 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:31.328 ===================================================== 00:24:31.328 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:31.328 ===================================================== 00:24:31.328 Controller Capabilities/Features 00:24:31.328 ================================ 00:24:31.328 Vendor ID: 0000 00:24:31.328 Subsystem Vendor ID: 0000 00:24:31.328 Serial Number: 6b7b8348c7183c66429f 00:24:31.328 Model Number: Linux 00:24:31.328 Firmware Version: 6.8.9-20 00:24:31.328 Recommended Arb Burst: 0 00:24:31.328 IEEE OUI Identifier: 00 00 00 00:24:31.328 Multi-path I/O 00:24:31.328 May have multiple subsystem ports: No 00:24:31.328 May have multiple controllers: No 00:24:31.328 Associated with SR-IOV VF: No 00:24:31.328 Max Data Transfer Size: Unlimited 00:24:31.328 Max Number of Namespaces: 0 00:24:31.328 Max Number of I/O Queues: 1024 00:24:31.328 NVMe Specification Version (VS): 1.3 00:24:31.328 NVMe Specification Version (Identify): 1.3 00:24:31.328 Maximum Queue Entries: 1024 
00:24:31.328 Contiguous Queues Required: No 00:24:31.328 Arbitration Mechanisms Supported 00:24:31.328 Weighted Round Robin: Not Supported 00:24:31.328 Vendor Specific: Not Supported 00:24:31.328 Reset Timeout: 7500 ms 00:24:31.328 Doorbell Stride: 4 bytes 00:24:31.328 NVM Subsystem Reset: Not Supported 00:24:31.328 Command Sets Supported 00:24:31.328 NVM Command Set: Supported 00:24:31.328 Boot Partition: Not Supported 00:24:31.328 Memory Page Size Minimum: 4096 bytes 00:24:31.328 Memory Page Size Maximum: 4096 bytes 00:24:31.328 Persistent Memory Region: Not Supported 00:24:31.328 Optional Asynchronous Events Supported 00:24:31.328 Namespace Attribute Notices: Not Supported 00:24:31.328 Firmware Activation Notices: Not Supported 00:24:31.328 ANA Change Notices: Not Supported 00:24:31.328 PLE Aggregate Log Change Notices: Not Supported 00:24:31.328 LBA Status Info Alert Notices: Not Supported 00:24:31.328 EGE Aggregate Log Change Notices: Not Supported 00:24:31.328 Normal NVM Subsystem Shutdown event: Not Supported 00:24:31.328 Zone Descriptor Change Notices: Not Supported 00:24:31.328 Discovery Log Change Notices: Supported 00:24:31.328 Controller Attributes 00:24:31.328 128-bit Host Identifier: Not Supported 00:24:31.328 Non-Operational Permissive Mode: Not Supported 00:24:31.328 NVM Sets: Not Supported 00:24:31.328 Read Recovery Levels: Not Supported 00:24:31.328 Endurance Groups: Not Supported 00:24:31.328 Predictable Latency Mode: Not Supported 00:24:31.328 Traffic Based Keep ALive: Not Supported 00:24:31.328 Namespace Granularity: Not Supported 00:24:31.328 SQ Associations: Not Supported 00:24:31.328 UUID List: Not Supported 00:24:31.328 Multi-Domain Subsystem: Not Supported 00:24:31.328 Fixed Capacity Management: Not Supported 00:24:31.328 Variable Capacity Management: Not Supported 00:24:31.328 Delete Endurance Group: Not Supported 00:24:31.328 Delete NVM Set: Not Supported 00:24:31.328 Extended LBA Formats Supported: Not Supported 00:24:31.328 Flexible 
Data Placement Supported: Not Supported 00:24:31.328 00:24:31.328 Controller Memory Buffer Support 00:24:31.328 ================================ 00:24:31.328 Supported: No 00:24:31.328 00:24:31.328 Persistent Memory Region Support 00:24:31.328 ================================ 00:24:31.328 Supported: No 00:24:31.328 00:24:31.328 Admin Command Set Attributes 00:24:31.328 ============================ 00:24:31.328 Security Send/Receive: Not Supported 00:24:31.328 Format NVM: Not Supported 00:24:31.328 Firmware Activate/Download: Not Supported 00:24:31.328 Namespace Management: Not Supported 00:24:31.328 Device Self-Test: Not Supported 00:24:31.328 Directives: Not Supported 00:24:31.328 NVMe-MI: Not Supported 00:24:31.328 Virtualization Management: Not Supported 00:24:31.328 Doorbell Buffer Config: Not Supported 00:24:31.328 Get LBA Status Capability: Not Supported 00:24:31.328 Command & Feature Lockdown Capability: Not Supported 00:24:31.328 Abort Command Limit: 1 00:24:31.328 Async Event Request Limit: 1 00:24:31.328 Number of Firmware Slots: N/A 00:24:31.328 Firmware Slot 1 Read-Only: N/A 00:24:31.328 Firmware Activation Without Reset: N/A 00:24:31.328 Multiple Update Detection Support: N/A 00:24:31.328 Firmware Update Granularity: No Information Provided 00:24:31.328 Per-Namespace SMART Log: No 00:24:31.328 Asymmetric Namespace Access Log Page: Not Supported 00:24:31.328 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:31.328 Command Effects Log Page: Not Supported 00:24:31.328 Get Log Page Extended Data: Supported 00:24:31.328 Telemetry Log Pages: Not Supported 00:24:31.328 Persistent Event Log Pages: Not Supported 00:24:31.328 Supported Log Pages Log Page: May Support 00:24:31.328 Commands Supported & Effects Log Page: Not Supported 00:24:31.328 Feature Identifiers & Effects Log Page:May Support 00:24:31.328 NVMe-MI Commands & Effects Log Page: May Support 00:24:31.328 Data Area 4 for Telemetry Log: Not Supported 00:24:31.328 Error Log Page Entries 
Supported: 1 00:24:31.328 Keep Alive: Not Supported 00:24:31.328 00:24:31.328 NVM Command Set Attributes 00:24:31.328 ========================== 00:24:31.328 Submission Queue Entry Size 00:24:31.328 Max: 1 00:24:31.328 Min: 1 00:24:31.328 Completion Queue Entry Size 00:24:31.328 Max: 1 00:24:31.328 Min: 1 00:24:31.328 Number of Namespaces: 0 00:24:31.328 Compare Command: Not Supported 00:24:31.328 Write Uncorrectable Command: Not Supported 00:24:31.328 Dataset Management Command: Not Supported 00:24:31.328 Write Zeroes Command: Not Supported 00:24:31.328 Set Features Save Field: Not Supported 00:24:31.328 Reservations: Not Supported 00:24:31.328 Timestamp: Not Supported 00:24:31.328 Copy: Not Supported 00:24:31.328 Volatile Write Cache: Not Present 00:24:31.328 Atomic Write Unit (Normal): 1 00:24:31.328 Atomic Write Unit (PFail): 1 00:24:31.328 Atomic Compare & Write Unit: 1 00:24:31.328 Fused Compare & Write: Not Supported 00:24:31.328 Scatter-Gather List 00:24:31.328 SGL Command Set: Supported 00:24:31.328 SGL Keyed: Not Supported 00:24:31.328 SGL Bit Bucket Descriptor: Not Supported 00:24:31.328 SGL Metadata Pointer: Not Supported 00:24:31.328 Oversized SGL: Not Supported 00:24:31.328 SGL Metadata Address: Not Supported 00:24:31.328 SGL Offset: Supported 00:24:31.328 Transport SGL Data Block: Not Supported 00:24:31.328 Replay Protected Memory Block: Not Supported 00:24:31.328 00:24:31.328 Firmware Slot Information 00:24:31.328 ========================= 00:24:31.328 Active slot: 0 00:24:31.328 00:24:31.328 00:24:31.328 Error Log 00:24:31.328 ========= 00:24:31.328 00:24:31.328 Active Namespaces 00:24:31.329 ================= 00:24:31.329 Discovery Log Page 00:24:31.329 ================== 00:24:31.329 Generation Counter: 2 00:24:31.329 Number of Records: 2 00:24:31.329 Record Format: 0 00:24:31.329 00:24:31.329 Discovery Log Entry 0 00:24:31.329 ---------------------- 00:24:31.329 Transport Type: 3 (TCP) 00:24:31.329 Address Family: 1 (IPv4) 00:24:31.329 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:24:31.329 Entry Flags: 00:24:31.329 Duplicate Returned Information: 0 00:24:31.329 Explicit Persistent Connection Support for Discovery: 0 00:24:31.329 Transport Requirements: 00:24:31.329 Secure Channel: Not Specified 00:24:31.329 Port ID: 1 (0x0001) 00:24:31.329 Controller ID: 65535 (0xffff) 00:24:31.329 Admin Max SQ Size: 32 00:24:31.329 Transport Service Identifier: 4420 00:24:31.329 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:31.329 Transport Address: 10.0.0.1 00:24:31.329 Discovery Log Entry 1 00:24:31.329 ---------------------- 00:24:31.329 Transport Type: 3 (TCP) 00:24:31.329 Address Family: 1 (IPv4) 00:24:31.329 Subsystem Type: 2 (NVM Subsystem) 00:24:31.329 Entry Flags: 00:24:31.329 Duplicate Returned Information: 0 00:24:31.329 Explicit Persistent Connection Support for Discovery: 0 00:24:31.329 Transport Requirements: 00:24:31.329 Secure Channel: Not Specified 00:24:31.329 Port ID: 1 (0x0001) 00:24:31.329 Controller ID: 65535 (0xffff) 00:24:31.329 Admin Max SQ Size: 32 00:24:31.329 Transport Service Identifier: 4420 00:24:31.329 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:31.329 Transport Address: 10.0.0.1 00:24:31.329 09:15:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:31.329 get_feature(0x01) failed 00:24:31.329 get_feature(0x02) failed 00:24:31.329 get_feature(0x04) failed 00:24:31.329 ===================================================== 00:24:31.329 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:31.329 ===================================================== 00:24:31.329 Controller Capabilities/Features 00:24:31.329 ================================ 00:24:31.329 Vendor ID: 0000 00:24:31.329 Subsystem Vendor ID: 
0000 00:24:31.329 Serial Number: 416a89362d247dd5d32f 00:24:31.329 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:31.329 Firmware Version: 6.8.9-20 00:24:31.329 Recommended Arb Burst: 6 00:24:31.329 IEEE OUI Identifier: 00 00 00 00:24:31.329 Multi-path I/O 00:24:31.329 May have multiple subsystem ports: Yes 00:24:31.329 May have multiple controllers: Yes 00:24:31.329 Associated with SR-IOV VF: No 00:24:31.329 Max Data Transfer Size: Unlimited 00:24:31.329 Max Number of Namespaces: 1024 00:24:31.329 Max Number of I/O Queues: 128 00:24:31.329 NVMe Specification Version (VS): 1.3 00:24:31.329 NVMe Specification Version (Identify): 1.3 00:24:31.329 Maximum Queue Entries: 1024 00:24:31.329 Contiguous Queues Required: No 00:24:31.329 Arbitration Mechanisms Supported 00:24:31.329 Weighted Round Robin: Not Supported 00:24:31.329 Vendor Specific: Not Supported 00:24:31.329 Reset Timeout: 7500 ms 00:24:31.329 Doorbell Stride: 4 bytes 00:24:31.329 NVM Subsystem Reset: Not Supported 00:24:31.329 Command Sets Supported 00:24:31.329 NVM Command Set: Supported 00:24:31.329 Boot Partition: Not Supported 00:24:31.329 Memory Page Size Minimum: 4096 bytes 00:24:31.329 Memory Page Size Maximum: 4096 bytes 00:24:31.329 Persistent Memory Region: Not Supported 00:24:31.329 Optional Asynchronous Events Supported 00:24:31.329 Namespace Attribute Notices: Supported 00:24:31.329 Firmware Activation Notices: Not Supported 00:24:31.329 ANA Change Notices: Supported 00:24:31.329 PLE Aggregate Log Change Notices: Not Supported 00:24:31.329 LBA Status Info Alert Notices: Not Supported 00:24:31.329 EGE Aggregate Log Change Notices: Not Supported 00:24:31.329 Normal NVM Subsystem Shutdown event: Not Supported 00:24:31.329 Zone Descriptor Change Notices: Not Supported 00:24:31.329 Discovery Log Change Notices: Not Supported 00:24:31.329 Controller Attributes 00:24:31.329 128-bit Host Identifier: Supported 00:24:31.329 Non-Operational Permissive Mode: Not Supported 00:24:31.329 NVM Sets: Not 
Supported 00:24:31.329 Read Recovery Levels: Not Supported 00:24:31.329 Endurance Groups: Not Supported 00:24:31.329 Predictable Latency Mode: Not Supported 00:24:31.329 Traffic Based Keep ALive: Supported 00:24:31.329 Namespace Granularity: Not Supported 00:24:31.329 SQ Associations: Not Supported 00:24:31.329 UUID List: Not Supported 00:24:31.329 Multi-Domain Subsystem: Not Supported 00:24:31.329 Fixed Capacity Management: Not Supported 00:24:31.329 Variable Capacity Management: Not Supported 00:24:31.329 Delete Endurance Group: Not Supported 00:24:31.329 Delete NVM Set: Not Supported 00:24:31.329 Extended LBA Formats Supported: Not Supported 00:24:31.329 Flexible Data Placement Supported: Not Supported 00:24:31.329 00:24:31.329 Controller Memory Buffer Support 00:24:31.329 ================================ 00:24:31.329 Supported: No 00:24:31.329 00:24:31.329 Persistent Memory Region Support 00:24:31.329 ================================ 00:24:31.329 Supported: No 00:24:31.329 00:24:31.329 Admin Command Set Attributes 00:24:31.329 ============================ 00:24:31.329 Security Send/Receive: Not Supported 00:24:31.329 Format NVM: Not Supported 00:24:31.329 Firmware Activate/Download: Not Supported 00:24:31.329 Namespace Management: Not Supported 00:24:31.329 Device Self-Test: Not Supported 00:24:31.329 Directives: Not Supported 00:24:31.329 NVMe-MI: Not Supported 00:24:31.329 Virtualization Management: Not Supported 00:24:31.329 Doorbell Buffer Config: Not Supported 00:24:31.329 Get LBA Status Capability: Not Supported 00:24:31.329 Command & Feature Lockdown Capability: Not Supported 00:24:31.329 Abort Command Limit: 4 00:24:31.329 Async Event Request Limit: 4 00:24:31.329 Number of Firmware Slots: N/A 00:24:31.329 Firmware Slot 1 Read-Only: N/A 00:24:31.329 Firmware Activation Without Reset: N/A 00:24:31.329 Multiple Update Detection Support: N/A 00:24:31.329 Firmware Update Granularity: No Information Provided 00:24:31.329 Per-Namespace SMART Log: Yes 
00:24:31.329 Asymmetric Namespace Access Log Page: Supported 00:24:31.329 ANA Transition Time : 10 sec 00:24:31.329 00:24:31.329 Asymmetric Namespace Access Capabilities 00:24:31.329 ANA Optimized State : Supported 00:24:31.329 ANA Non-Optimized State : Supported 00:24:31.329 ANA Inaccessible State : Supported 00:24:31.329 ANA Persistent Loss State : Supported 00:24:31.329 ANA Change State : Supported 00:24:31.329 ANAGRPID is not changed : No 00:24:31.329 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:31.329 00:24:31.329 ANA Group Identifier Maximum : 128 00:24:31.329 Number of ANA Group Identifiers : 128 00:24:31.329 Max Number of Allowed Namespaces : 1024 00:24:31.329 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:31.329 Command Effects Log Page: Supported 00:24:31.329 Get Log Page Extended Data: Supported 00:24:31.329 Telemetry Log Pages: Not Supported 00:24:31.329 Persistent Event Log Pages: Not Supported 00:24:31.329 Supported Log Pages Log Page: May Support 00:24:31.329 Commands Supported & Effects Log Page: Not Supported 00:24:31.329 Feature Identifiers & Effects Log Page:May Support 00:24:31.329 NVMe-MI Commands & Effects Log Page: May Support 00:24:31.329 Data Area 4 for Telemetry Log: Not Supported 00:24:31.329 Error Log Page Entries Supported: 128 00:24:31.329 Keep Alive: Supported 00:24:31.329 Keep Alive Granularity: 1000 ms 00:24:31.329 00:24:31.329 NVM Command Set Attributes 00:24:31.329 ========================== 00:24:31.329 Submission Queue Entry Size 00:24:31.329 Max: 64 00:24:31.329 Min: 64 00:24:31.329 Completion Queue Entry Size 00:24:31.329 Max: 16 00:24:31.329 Min: 16 00:24:31.329 Number of Namespaces: 1024 00:24:31.329 Compare Command: Not Supported 00:24:31.329 Write Uncorrectable Command: Not Supported 00:24:31.329 Dataset Management Command: Supported 00:24:31.329 Write Zeroes Command: Supported 00:24:31.329 Set Features Save Field: Not Supported 00:24:31.329 Reservations: Not Supported 00:24:31.329 Timestamp: Not Supported 
00:24:31.329 Copy: Not Supported 00:24:31.329 Volatile Write Cache: Present 00:24:31.329 Atomic Write Unit (Normal): 1 00:24:31.329 Atomic Write Unit (PFail): 1 00:24:31.329 Atomic Compare & Write Unit: 1 00:24:31.329 Fused Compare & Write: Not Supported 00:24:31.329 Scatter-Gather List 00:24:31.329 SGL Command Set: Supported 00:24:31.329 SGL Keyed: Not Supported 00:24:31.329 SGL Bit Bucket Descriptor: Not Supported 00:24:31.329 SGL Metadata Pointer: Not Supported 00:24:31.330 Oversized SGL: Not Supported 00:24:31.330 SGL Metadata Address: Not Supported 00:24:31.330 SGL Offset: Supported 00:24:31.330 Transport SGL Data Block: Not Supported 00:24:31.330 Replay Protected Memory Block: Not Supported 00:24:31.330 00:24:31.330 Firmware Slot Information 00:24:31.330 ========================= 00:24:31.330 Active slot: 0 00:24:31.330 00:24:31.330 Asymmetric Namespace Access 00:24:31.330 =========================== 00:24:31.330 Change Count : 0 00:24:31.330 Number of ANA Group Descriptors : 1 00:24:31.330 ANA Group Descriptor : 0 00:24:31.330 ANA Group ID : 1 00:24:31.330 Number of NSID Values : 1 00:24:31.330 Change Count : 0 00:24:31.330 ANA State : 1 00:24:31.330 Namespace Identifier : 1 00:24:31.330 00:24:31.330 Commands Supported and Effects 00:24:31.330 ============================== 00:24:31.330 Admin Commands 00:24:31.330 -------------- 00:24:31.330 Get Log Page (02h): Supported 00:24:31.330 Identify (06h): Supported 00:24:31.330 Abort (08h): Supported 00:24:31.330 Set Features (09h): Supported 00:24:31.330 Get Features (0Ah): Supported 00:24:31.330 Asynchronous Event Request (0Ch): Supported 00:24:31.330 Keep Alive (18h): Supported 00:24:31.330 I/O Commands 00:24:31.330 ------------ 00:24:31.330 Flush (00h): Supported 00:24:31.330 Write (01h): Supported LBA-Change 00:24:31.330 Read (02h): Supported 00:24:31.330 Write Zeroes (08h): Supported LBA-Change 00:24:31.330 Dataset Management (09h): Supported 00:24:31.330 00:24:31.330 Error Log 00:24:31.330 ========= 
00:24:31.330 Entry: 0 00:24:31.330 Error Count: 0x3 00:24:31.330 Submission Queue Id: 0x0 00:24:31.330 Command Id: 0x5 00:24:31.330 Phase Bit: 0 00:24:31.330 Status Code: 0x2 00:24:31.330 Status Code Type: 0x0 00:24:31.330 Do Not Retry: 1 00:24:31.330 Error Location: 0x28 00:24:31.330 LBA: 0x0 00:24:31.330 Namespace: 0x0 00:24:31.330 Vendor Log Page: 0x0 00:24:31.330 ----------- 00:24:31.330 Entry: 1 00:24:31.330 Error Count: 0x2 00:24:31.330 Submission Queue Id: 0x0 00:24:31.330 Command Id: 0x5 00:24:31.330 Phase Bit: 0 00:24:31.330 Status Code: 0x2 00:24:31.330 Status Code Type: 0x0 00:24:31.330 Do Not Retry: 1 00:24:31.330 Error Location: 0x28 00:24:31.330 LBA: 0x0 00:24:31.330 Namespace: 0x0 00:24:31.330 Vendor Log Page: 0x0 00:24:31.330 ----------- 00:24:31.330 Entry: 2 00:24:31.330 Error Count: 0x1 00:24:31.330 Submission Queue Id: 0x0 00:24:31.330 Command Id: 0x4 00:24:31.330 Phase Bit: 0 00:24:31.330 Status Code: 0x2 00:24:31.330 Status Code Type: 0x0 00:24:31.330 Do Not Retry: 1 00:24:31.330 Error Location: 0x28 00:24:31.330 LBA: 0x0 00:24:31.330 Namespace: 0x0 00:24:31.330 Vendor Log Page: 0x0 00:24:31.330 00:24:31.330 Number of Queues 00:24:31.330 ================ 00:24:31.330 Number of I/O Submission Queues: 128 00:24:31.330 Number of I/O Completion Queues: 128 00:24:31.330 00:24:31.330 ZNS Specific Controller Data 00:24:31.330 ============================ 00:24:31.330 Zone Append Size Limit: 0 00:24:31.330 00:24:31.330 00:24:31.330 Active Namespaces 00:24:31.330 ================= 00:24:31.330 get_feature(0x05) failed 00:24:31.330 Namespace ID:1 00:24:31.330 Command Set Identifier: NVM (00h) 00:24:31.330 Deallocate: Supported 00:24:31.330 Deallocated/Unwritten Error: Not Supported 00:24:31.330 Deallocated Read Value: Unknown 00:24:31.330 Deallocate in Write Zeroes: Not Supported 00:24:31.330 Deallocated Guard Field: 0xFFFF 00:24:31.330 Flush: Supported 00:24:31.330 Reservation: Not Supported 00:24:31.330 Namespace Sharing Capabilities: Multiple 
Controllers 00:24:31.330 Size (in LBAs): 1953525168 (931GiB) 00:24:31.330 Capacity (in LBAs): 1953525168 (931GiB) 00:24:31.330 Utilization (in LBAs): 1953525168 (931GiB) 00:24:31.330 UUID: 40ced083-db94-4deb-b8fd-4bdc0b623ec3 00:24:31.330 Thin Provisioning: Not Supported 00:24:31.330 Per-NS Atomic Units: Yes 00:24:31.330 Atomic Boundary Size (Normal): 0 00:24:31.330 Atomic Boundary Size (PFail): 0 00:24:31.330 Atomic Boundary Offset: 0 00:24:31.330 NGUID/EUI64 Never Reused: No 00:24:31.330 ANA group ID: 1 00:24:31.330 Namespace Write Protected: No 00:24:31.330 Number of LBA Formats: 1 00:24:31.330 Current LBA Format: LBA Format #00 00:24:31.330 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:31.330 00:24:31.330 09:15:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:31.330 09:15:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:31.330 09:15:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:24:31.330 09:15:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:31.330 09:15:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:24:31.330 09:15:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:31.330 09:15:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:31.330 rmmod nvme_tcp 00:24:31.330 rmmod nvme_fabrics 00:24:31.330 09:15:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:31.330 09:15:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:24:31.330 09:15:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:24:31.330 09:15:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:24:31.330 09:15:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:31.330 09:15:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:31.330 09:15:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:31.330 09:15:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:24:31.330 09:15:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:24:31.330 09:15:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:24:31.330 09:15:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:31.330 09:15:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:31.330 09:15:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:31.330 09:15:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.330 09:15:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.330 09:15:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:33.859 09:15:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:33.859 09:15:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:33.859 09:15:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:33.859 09:15:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:24:33.859 09:15:10 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:33.859 09:15:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:33.859 09:15:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:33.859 09:15:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:33.859 09:15:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:33.859 09:15:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:33.859 09:15:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:34.796 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:34.796 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:34.796 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:34.796 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:34.796 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:34.796 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:34.796 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:34.796 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:34.796 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:34.796 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:34.796 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:34.796 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:34.796 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:34.796 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:34.796 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:34.796 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 
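The configfs setup traced earlier in this test (nvmf/common.sh@686–705) and the teardown traced just above (clean_kernel_target, @712–723) can be summarized as the sketch below. This is a hedged reconstruction, not the test script itself: the xtrace shows only the echoed values (e.g. `echo /dev/nvme0n1`, `echo 10.0.0.1`), not their redirect targets, so the attribute file names (`device_path`, `enable`, `addr_traddr`, `addr_trtype`, `addr_trsvcid`, `addr_adrfam`) are inferred from the standard Linux nvmet configfs layout. The real commands require root and the `nvmet`/`nvmet_tcp` modules; here each command is routed through a `run` helper that only prints it, so the sketch can be dry-run anywhere.

```shell
#!/usr/bin/env bash
# Hedged sketch of the kernel NVMe/TCP target setup/teardown traced in the
# log above. Attribute file names are assumed from the standard nvmet
# configfs layout; the trace shows only the echoed values.
run() { echo "+ $*"; }   # dry-run; replace with: eval "$*"  on a real host

NQN=nqn.2016-06.io.spdk:testnqn
CFG=/sys/kernel/config/nvmet

# --- setup: subsystem, namespace 1 backed by /dev/nvme0n1, TCP port 4420 ---
run mkdir "$CFG/subsystems/$NQN"
run mkdir "$CFG/subsystems/$NQN/namespaces/1"
run mkdir "$CFG/ports/1"
run "echo /dev/nvme0n1 > $CFG/subsystems/$NQN/namespaces/1/device_path"
run "echo 1 > $CFG/subsystems/$NQN/namespaces/1/enable"
run "echo 10.0.0.1 > $CFG/ports/1/addr_traddr"
run "echo tcp > $CFG/ports/1/addr_trtype"
run "echo 4420 > $CFG/ports/1/addr_trsvcid"
run "echo ipv4 > $CFG/ports/1/addr_adrfam"
# exporting the subsystem on the port = symlink under ports/1/subsystems/
run ln -s "$CFG/subsystems/$NQN" "$CFG/ports/1/subsystems/"

# --- teardown, in reverse order, as in clean_kernel_target ---
run rm -f "$CFG/ports/1/subsystems/$NQN"
run rmdir "$CFG/subsystems/$NQN/namespaces/1"
run rmdir "$CFG/ports/1"
run rmdir "$CFG/subsystems/$NQN"
run modprobe -r nvmet_tcp nvmet
```

Once the port symlink exists, the target is reachable exactly as the log shows: `nvme discover -t tcp -a 10.0.0.1 -s 4420` returns the discovery subsystem plus the exported NQN.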
00:24:35.730 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:24:35.988 00:24:35.988 real 0m9.940s 00:24:35.988 user 0m2.173s 00:24:35.988 sys 0m3.696s 00:24:35.988 09:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:35.988 09:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:35.988 ************************************ 00:24:35.988 END TEST nvmf_identify_kernel_target 00:24:35.988 ************************************ 00:24:35.988 09:15:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:35.988 09:15:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:35.988 09:15:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:35.988 09:15:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.988 ************************************ 00:24:35.988 START TEST nvmf_auth_host 00:24:35.988 ************************************ 00:24:35.988 09:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:35.988 * Looking for test storage... 
00:24:35.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:35.988 09:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:35.988 09:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:24:35.988 09:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:36.248 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:36.248 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:36.248 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:36.248 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:36.248 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:36.248 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:36.248 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:36.248 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:36.248 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:36.248 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:36.248 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:36.248 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:36.248 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:24:36.248 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:24:36.248 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:36.248 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:36.248 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:24:36.248 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:24:36.248 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:36.248 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:24:36.248 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:36.248 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:24:36.248 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:24:36.248 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:36.248 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:24:36.248 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:36.248 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:36.248 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:36.248 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:24:36.248 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:36.248 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:36.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.249 --rc genhtml_branch_coverage=1 00:24:36.249 --rc genhtml_function_coverage=1 00:24:36.249 --rc genhtml_legend=1 00:24:36.249 --rc geninfo_all_blocks=1 00:24:36.249 --rc geninfo_unexecuted_blocks=1 00:24:36.249 00:24:36.249 ' 00:24:36.249 09:15:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:36.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.249 --rc genhtml_branch_coverage=1 00:24:36.249 --rc genhtml_function_coverage=1 00:24:36.249 --rc genhtml_legend=1 00:24:36.249 --rc geninfo_all_blocks=1 00:24:36.249 --rc geninfo_unexecuted_blocks=1 00:24:36.249 00:24:36.249 ' 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:36.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.249 --rc genhtml_branch_coverage=1 00:24:36.249 --rc genhtml_function_coverage=1 00:24:36.249 --rc genhtml_legend=1 00:24:36.249 --rc geninfo_all_blocks=1 00:24:36.249 --rc geninfo_unexecuted_blocks=1 00:24:36.249 00:24:36.249 ' 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:36.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.249 --rc genhtml_branch_coverage=1 00:24:36.249 --rc genhtml_function_coverage=1 00:24:36.249 --rc genhtml_legend=1 00:24:36.249 --rc geninfo_all_blocks=1 00:24:36.249 --rc geninfo_unexecuted_blocks=1 00:24:36.249 00:24:36.249 ' 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.249 09:15:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:36.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:36.249 09:15:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:36.249 09:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:38.155 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:38.155 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:38.155 Found net devices under 0000:09:00.0: cvl_0_0 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:38.155 Found net devices under 0000:09:00.1: cvl_0_1 00:24:38.155 09:15:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:38.155 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:38.156 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:38.156 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:38.156 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.156 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:38.156 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:38.156 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:38.156 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:38.156 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:38.156 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:38.156 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:38.156 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.156 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:38.156 09:15:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:38.156 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:38.156 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:38.414 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:38.414 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:38.414 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:38.414 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:38.414 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:38.414 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:38.414 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:38.414 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:38.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:38.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:24:38.414 00:24:38.414 --- 10.0.0.2 ping statistics --- 00:24:38.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.414 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:24:38.414 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:38.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:38.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:24:38.414 00:24:38.414 --- 10.0.0.1 ping statistics --- 00:24:38.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.414 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:24:38.414 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.414 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:24:38.414 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:38.414 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:38.414 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:38.414 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:38.414 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:38.414 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:38.414 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:38.414 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:38.414 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:38.414 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:38.414 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.414 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3422021 00:24:38.414 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:38.414 09:15:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3422021 00:24:38.414 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 3422021 ']' 00:24:38.414 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.414 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:38.414 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.414 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:38.414 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.672 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:38.672 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:24:38.672 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:38.672 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:38.672 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.672 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.672 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:38.672 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:38.672 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:38.672 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:38.672 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:38.672 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:38.672 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:38.672 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:38.672 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5bdf56406916db6f5a9143bb3133d3a2 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.FQ4 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5bdf56406916db6f5a9143bb3133d3a2 0 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5bdf56406916db6f5a9143bb3133d3a2 0 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5bdf56406916db6f5a9143bb3133d3a2 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.FQ4 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.FQ4 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.FQ4 00:24:38.673 09:15:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3e7823af5df1d1c8728d1702dc868b5e53540d969290963c7f418c25548d835b 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.yZR 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3e7823af5df1d1c8728d1702dc868b5e53540d969290963c7f418c25548d835b 3 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3e7823af5df1d1c8728d1702dc868b5e53540d969290963c7f418c25548d835b 3 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3e7823af5df1d1c8728d1702dc868b5e53540d969290963c7f418c25548d835b 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.yZR 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.yZR 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.yZR 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e08b0f0aee305c5f68cabf99db1a8690b6121e8d4658a599 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.kdJ 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e08b0f0aee305c5f68cabf99db1a8690b6121e8d4658a599 0 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e08b0f0aee305c5f68cabf99db1a8690b6121e8d4658a599 0 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:38.673 09:15:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e08b0f0aee305c5f68cabf99db1a8690b6121e8d4658a599 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:38.673 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:38.931 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.kdJ 00:24:38.931 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.kdJ 00:24:38.931 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.kdJ 00:24:38.931 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:38.931 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:38.931 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:38.931 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:38.931 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:38.931 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:38.931 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:38.931 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=034268b79982ef7b555707be1503575e67d84a5e1b11d300 00:24:38.931 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:38.931 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.jk2 00:24:38.931 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 034268b79982ef7b555707be1503575e67d84a5e1b11d300 2 00:24:38.931 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 034268b79982ef7b555707be1503575e67d84a5e1b11d300 2 00:24:38.931 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:38.931 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:38.931 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=034268b79982ef7b555707be1503575e67d84a5e1b11d300 00:24:38.931 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.jk2 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.jk2 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.jk2 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=70b86615ac2787f30d202f5fd101732c 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.RUL 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 70b86615ac2787f30d202f5fd101732c 1 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 70b86615ac2787f30d202f5fd101732c 1 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=70b86615ac2787f30d202f5fd101732c 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.RUL 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.RUL 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.RUL 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=1a1dc0e028b81fffd8282cc4b4070381 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Li5 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1a1dc0e028b81fffd8282cc4b4070381 1 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1a1dc0e028b81fffd8282cc4b4070381 1 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1a1dc0e028b81fffd8282cc4b4070381 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Li5 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Li5 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Li5 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:38.932 09:15:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=03b111fe08edd2b011f02cfbe875ab0db2fa7b2d00f615f9 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.gNN 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 03b111fe08edd2b011f02cfbe875ab0db2fa7b2d00f615f9 2 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 03b111fe08edd2b011f02cfbe875ab0db2fa7b2d00f615f9 2 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=03b111fe08edd2b011f02cfbe875ab0db2fa7b2d00f615f9 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.gNN 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.gNN 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.gNN 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=201f3a304e08877add539ab7f769e3f8 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.7eN 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 201f3a304e08877add539ab7f769e3f8 0 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 201f3a304e08877add539ab7f769e3f8 0 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=201f3a304e08877add539ab7f769e3f8 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:38.932 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:39.191 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.7eN 00:24:39.191 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.7eN 00:24:39.191 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.7eN 00:24:39.191 09:15:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:39.191 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:39.191 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:39.191 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:39.191 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:39.191 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:39.191 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:39.191 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2fe10b0397668a2e7c9de0477715aa182adc19e1c04fa86b78708de381e381c9 00:24:39.191 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:39.191 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Xdu 00:24:39.191 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2fe10b0397668a2e7c9de0477715aa182adc19e1c04fa86b78708de381e381c9 3 00:24:39.191 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2fe10b0397668a2e7c9de0477715aa182adc19e1c04fa86b78708de381e381c9 3 00:24:39.191 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:39.191 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:39.191 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2fe10b0397668a2e7c9de0477715aa182adc19e1c04fa86b78708de381e381c9 00:24:39.191 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:39.191 09:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:24:39.191 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Xdu 00:24:39.191 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Xdu 00:24:39.191 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Xdu 00:24:39.191 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:39.191 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3422021 00:24:39.191 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 3422021 ']' 00:24:39.191 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.191 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:39.191 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
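The `gen_dhchap_key` / `format_dhchap_key` steps traced above draw random bytes with `xxd` from `/dev/urandom`, then pipe the hex through an inline `python -` heredoc to produce the `DHHC-1:<digest>:<base64>:` string. A minimal standalone sketch of that encoding, assuming the NVMe DH-HMAC-CHAP configured-secret layout (key bytes followed by their little-endian CRC-32, base64-encoded; the two-digit field selects the hash transform: 00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512, matching the trailing `0`..`3` argument in the trace):

```shell
#!/usr/bin/env bash
# Sketch of format_dhchap_key as seen in the trace: hex key in, DHHC-1 secret out.
format_dhchap_key() {
  local key=$1 digest=$2
  python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
# Assumed secret layout: raw key bytes followed by CRC-32, little-endian.
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF
}

# One of the keys generated in the trace (32 hex chars, sha256 => digest 1):
format_dhchap_key 70b86615ac2787f30d202f5fd101732c 1
```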
00:24:39.191 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:39.191 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.449 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:39.449 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:24:39.449 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:39.449 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.FQ4 00:24:39.449 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.449 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.449 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.449 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.yZR ]] 00:24:39.449 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.yZR 00:24:39.449 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.449 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.449 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.449 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:39.449 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.kdJ 00:24:39.449 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.449 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:24:39.449 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.449 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.jk2 ]] 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jk2 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.RUL 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Li5 ]] 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Li5 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.gNN 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.7eN ]] 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.7eN 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Xdu 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:39.450 09:15:16 
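The `host/auth.sh@80`-`@82` loop above registers every generated key file with the running SPDK target, adding the matching controller key (`ckeyN`) only when one was generated for that slot. A hedged sketch of that loop; the `rpc_cmd` stub below just echoes, and the file paths are illustrative, whereas a live run would invoke `scripts/rpc.py -s /var/tmp/spdk.sock`:

```shell
#!/usr/bin/env bash
# Stub standing in for SPDK's rpc_cmd helper; a real run would call
# scripts/rpc.py -s /var/tmp/spdk.sock "$@" against a live target.
rpc_cmd() { echo "rpc_cmd $*"; }

# Key files as produced by gen_dhchap_key (paths illustrative).
keys=(/tmp/spdk.key-null.FQ4 /tmp/spdk.key-sha256.RUL)
ckeys=(/tmp/spdk.key-sha512.yZR "")   # empty slot: no ctrl key generated

for i in "${!keys[@]}"; do
  rpc_cmd keyring_file_add_key "key$i" "${keys[$i]}"
  if [[ -n ${ckeys[$i]} ]]; then
    rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[$i]}"
  fi
done
```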
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:39.450 09:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:40.822 Waiting for block devices as requested 00:24:40.822 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:40.822 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:40.822 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:40.822 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:41.079 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:41.079 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:41.079 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:41.079 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:41.337 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:24:41.337 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:41.595 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:41.595 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:41.595 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:41.595 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:41.853 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:41.853 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:41.853 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:42.418 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:42.418 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:42.418 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:42.418 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:42.418 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:42.419 No valid GPT data, bailing 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:24:42.419 00:24:42.419 Discovery Log Number of Records 2, Generation counter 2 00:24:42.419 =====Discovery Log Entry 0====== 00:24:42.419 trtype: tcp 00:24:42.419 adrfam: ipv4 00:24:42.419 subtype: current discovery subsystem 00:24:42.419 treq: not specified, sq flow control disable supported 00:24:42.419 portid: 1 00:24:42.419 trsvcid: 4420 00:24:42.419 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:42.419 traddr: 10.0.0.1 00:24:42.419 eflags: none 00:24:42.419 sectype: none 00:24:42.419 =====Discovery Log Entry 1====== 00:24:42.419 trtype: tcp 00:24:42.419 adrfam: ipv4 00:24:42.419 subtype: nvme subsystem 00:24:42.419 treq: not specified, sq flow control disable supported 00:24:42.419 portid: 1 00:24:42.419 trsvcid: 4420 00:24:42.419 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:42.419 traddr: 10.0.0.1 00:24:42.419 eflags: none 00:24:42.419 sectype: none 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
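The `configure_kernel_target` calls traced above (`nvmf/common.sh@660`-`@705`) drive the kernel nvmet configfs tree directly: create a subsystem and namespace, back the namespace with the probed `/dev/nvme0n1`, open a TCP port on 10.0.0.1:4420, and link the two so `nvme discover` sees both log entries. A sketch of that sequence, wrapped in a function that is deliberately not invoked since it needs root and the nvmet modules; the attribute file names are assumptions based on the standard nvmet configfs layout:

```shell
#!/usr/bin/env bash
# Sketch only: requires root plus the nvmet and nvmet-tcp kernel modules.
configure_kernel_target() {
  local nvmet=/sys/kernel/config/nvmet
  local subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  local port=$nvmet/ports/1

  modprobe nvmet
  modprobe nvmet-tcp
  mkdir -p "$subsys/namespaces/1" "$port"

  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"

  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp      > "$port/addr_trtype"
  echo 4420     > "$port/addr_trsvcid"
  echo ipv4     > "$port/addr_adrfam"

  # Expose the subsystem on the port; verify afterwards with:
  #   nvme discover -t tcp -a 10.0.0.1 -s 4420
  ln -s "$subsys" "$port/subsystems/"
}
```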
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: ]] 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.419 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.676 nvme0n1 00:24:42.676 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.676 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.676 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.676 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.676 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.676 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.676 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.676 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.676 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
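`nvmet_auth_set_key` (`host/auth.sh@42`-`@51`, the `echo 'hmac(sha256)'` / `echo ffdhe2048` / `echo DHHC-1:...` trio above) configures the target side of DH-HMAC-CHAP by writing into the host entry created under `/sys/kernel/config/nvmet/hosts`. A sketch, again as an uninvoked root-only function; the `dhchap_*` attribute names are assumptions based on the kernel nvmet host configfs interface:

```shell
#!/usr/bin/env bash
# Sketch only: needs root and an existing host entry, e.g.
#   mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
nvmet_auth_set_key() {
  local digest=$1 dhgroup=$2 key=$3 ckey=$4
  local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

  echo "hmac($digest)" > "$host/dhchap_hash"      # e.g. hmac(sha256)
  echo "$dhgroup"      > "$host/dhchap_dhgroup"   # e.g. ffdhe2048
  echo "$key"          > "$host/dhchap_key"       # DHHC-1:..: host secret
  if [[ -n $ckey ]]; then
    echo "$ckey" > "$host/dhchap_ctrl_key"        # bidirectional auth only
  fi
}
```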
xtrace_disable 00:24:42.676 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.676 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.676 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:42.676 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:42.676 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.676 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:42.676 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.676 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:42.676 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:42.676 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:42.676 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWJkZjU2NDA2OTE2ZGI2ZjVhOTE0M2JiMzEzM2QzYTIn25OK: 00:24:42.676 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: 00:24:42.676 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:42.676 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:42.676 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWJkZjU2NDA2OTE2ZGI2ZjVhOTE0M2JiMzEzM2QzYTIn25OK: 00:24:42.676 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: ]] 00:24:42.676 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: 00:24:42.676 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:24:42.676 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.676 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:42.676 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:42.676 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:42.676 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.676 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:42.676 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.677 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.677 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.677 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.677 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:42.677 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:42.677 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:42.677 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.677 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.677 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:24:42.677 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.677 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:42.677 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:42.677 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:42.677 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:42.677 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.677 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.677 nvme0n1 00:24:42.677 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.677 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.677 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.677 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.677 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.677 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.934 09:15:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: ]] 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.934 
09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.934 nvme0n1 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.934 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzBiODY2MTVhYzI3ODdmMzBkMjAyZjVmZDEwMTczMmNJ/ag+: 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzBiODY2MTVhYzI3ODdmMzBkMjAyZjVmZDEwMTczMmNJ/ag+: 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: ]] 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.192 09:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:24:43.192 nvme0n1 00:24:43.192 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.192 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.192 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.192 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.192 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.192 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.192 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.192 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.192 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.192 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.192 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.192 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.192 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:43.192 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.192 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:43.192 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:43.193 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:43.193 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDNiMTExZmUwOGVkZDJiMDExZjAyY2ZiZTg3NWFiMGRiMmZhN2IyZDAwZjYxNWY5lPWFqg==: 00:24:43.193 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: 00:24:43.193 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:43.193 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:43.193 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDNiMTExZmUwOGVkZDJiMDExZjAyY2ZiZTg3NWFiMGRiMmZhN2IyZDAwZjYxNWY5lPWFqg==: 00:24:43.193 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: ]] 00:24:43.193 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: 00:24:43.193 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:43.193 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.193 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:43.193 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:43.193 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:43.193 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.193 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:43.193 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.193 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.193 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.193 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.193 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:43.193 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:43.193 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:43.193 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.193 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.193 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:43.193 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.193 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:43.193 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:43.193 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:43.193 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:43.193 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.193 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.450 nvme0n1 00:24:43.450 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.450 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.450 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 
-- # xtrace_disable 00:24:43.450 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.450 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmZlMTBiMDM5NzY2OGEyZTdjOWRlMDQ3NzcxNWFhMTgyYWRjMTllMWMwNGZhODZiNzg3MDhkZTM4MWUzODFjOf26/Fw=: 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:43.451 09:15:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmZlMTBiMDM5NzY2OGEyZTdjOWRlMDQ3NzcxNWFhMTgyYWRjMTllMWMwNGZhODZiNzg3MDhkZTM4MWUzODFjOf26/Fw=: 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.451 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.708 nvme0n1 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.708 
09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWJkZjU2NDA2OTE2ZGI2ZjVhOTE0M2JiMzEzM2QzYTIn25OK: 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWJkZjU2NDA2OTE2ZGI2ZjVhOTE0M2JiMzEzM2QzYTIn25OK: 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: ]] 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: 00:24:43.708 
09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.708 09:15:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.708 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.965 nvme0n1 00:24:43.965 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.965 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.965 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.965 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.965 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.965 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.965 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.965 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.965 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.965 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.965 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.965 09:15:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.965 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:43.965 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.965 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:43.965 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:43.965 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:43.965 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:24:43.965 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:24:43.966 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:43.966 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:43.966 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:24:43.966 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: ]] 00:24:43.966 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:24:43.966 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:43.966 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.966 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:43.966 09:15:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:43.966 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:43.966 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.966 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:43.966 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.966 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.966 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.966 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.966 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:43.966 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:43.966 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:43.966 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.966 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.966 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:43.966 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.966 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:43.966 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:43.966 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:43.966 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:43.966 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.966 09:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.223 nvme0n1 00:24:44.223 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.223 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.223 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.223 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.224 09:15:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzBiODY2MTVhYzI3ODdmMzBkMjAyZjVmZDEwMTczMmNJ/ag+: 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzBiODY2MTVhYzI3ODdmMzBkMjAyZjVmZDEwMTczMmNJ/ag+: 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: ]] 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.224 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.481 nvme0n1 00:24:44.481 09:15:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.481 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.481 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.481 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.481 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.481 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.481 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.481 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.481 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.481 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.481 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.481 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.481 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:44.481 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.481 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:44.481 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:44.481 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:44.481 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDNiMTExZmUwOGVkZDJiMDExZjAyY2ZiZTg3NWFiMGRiMmZhN2IyZDAwZjYxNWY5lPWFqg==: 00:24:44.481 09:15:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: 00:24:44.481 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:44.481 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:44.481 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDNiMTExZmUwOGVkZDJiMDExZjAyY2ZiZTg3NWFiMGRiMmZhN2IyZDAwZjYxNWY5lPWFqg==: 00:24:44.481 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: ]] 00:24:44.481 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: 00:24:44.481 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:44.481 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.481 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:44.481 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:44.481 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:44.482 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.482 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:44.482 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.482 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.482 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.482 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:24:44.482 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:44.482 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:44.482 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:44.482 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.482 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.482 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:44.482 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.482 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:44.482 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:44.482 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:44.482 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:44.482 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.482 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.739 nvme0n1 00:24:44.739 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.739 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.739 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.739 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
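The `get_main_ns_ip` trace repeated above picks the connection address by mapping the active transport to the name of an environment variable and then dereferencing it. A minimal stand-alone sketch of that selection logic (the variable names `NVMF_FIRST_TARGET_IP` and `NVMF_INITIATOR_IP` are taken from the trace; the value `10.0.0.1` is the one echoed there, assigned here only for illustration):

```shell
#!/usr/bin/env bash
# Sketch of the get_main_ns_ip logic seen in the trace: choose the env-var
# name for the transport, then use indirect expansion to read its value.
get_main_ns_ip() {
    local transport=$1 ip
    local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    if [[ -z ${ip_candidates[$transport]:-} ]]; then
        return 1                      # unknown transport: no candidate
    fi
    ip=${ip_candidates[$transport]}
    if [[ -z ${!ip:-} ]]; then
        return 1                      # candidate variable is unset/empty
    fi
    echo "${!ip}"                     # indirect expansion of the named var
}

NVMF_INITIATOR_IP=10.0.0.1            # value echoed in the trace
get_main_ns_ip tcp                    # prints 10.0.0.1
```

The same pattern generalizes to any transport by adding entries to the associative array, which is why the trace shows both the `rdma` and `tcp` candidates being populated even though only `tcp` is used here.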
00:24:44.739 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.739 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.739 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.739 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.739 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.739 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.739 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.739 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.740 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:44.740 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.740 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:44.740 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:44.740 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:44.740 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmZlMTBiMDM5NzY2OGEyZTdjOWRlMDQ3NzcxNWFhMTgyYWRjMTllMWMwNGZhODZiNzg3MDhkZTM4MWUzODFjOf26/Fw=: 00:24:44.740 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:44.740 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:44.740 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:44.740 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MmZlMTBiMDM5NzY2OGEyZTdjOWRlMDQ3NzcxNWFhMTgyYWRjMTllMWMwNGZhODZiNzg3MDhkZTM4MWUzODFjOf26/Fw=: 00:24:44.740 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:44.740 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:24:44.740 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.740 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:44.740 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:44.740 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:44.740 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.740 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:44.740 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.740 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.740 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.740 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.740 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:44.740 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:44.740 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:44.740 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.740 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.740 09:15:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:44.740 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.740 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:44.740 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:44.740 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:44.740 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:44.740 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.740 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.997 nvme0n1 00:24:44.997 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.997 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.997 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.997 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.997 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.997 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.997 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.997 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.997 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.997 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:44.997 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.997 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:44.997 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.997 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:44.997 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.997 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:44.997 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:44.997 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:44.997 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWJkZjU2NDA2OTE2ZGI2ZjVhOTE0M2JiMzEzM2QzYTIn25OK: 00:24:44.997 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: 00:24:44.997 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:44.997 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:44.997 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWJkZjU2NDA2OTE2ZGI2ZjVhOTE0M2JiMzEzM2QzYTIn25OK: 00:24:44.998 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: ]] 00:24:44.998 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: 00:24:44.998 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:44.998 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.998 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:44.998 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:44.998 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:44.998 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.998 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:44.998 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.998 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.998 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.998 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.998 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:44.998 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:44.998 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:44.998 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.998 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.998 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:44.998 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.998 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:24:44.998 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:44.998 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:44.998 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:44.998 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.998 09:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.255 nvme0n1 00:24:45.255 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.255 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.255 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.255 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.255 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.255 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.255 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.255 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.255 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.255 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.255 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.255 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:24:45.255 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:45.255 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.255 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:45.255 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:45.255 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:45.255 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:24:45.255 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:24:45.255 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:45.255 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:45.255 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:24:45.255 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: ]] 00:24:45.255 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:24:45.513 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:45.513 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.513 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:45.513 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:45.513 
09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:45.513 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.513 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:45.513 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.513 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.513 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.513 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.513 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:45.513 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:45.513 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:45.513 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.513 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.513 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:45.513 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.513 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:45.513 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:45.513 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:45.513 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:45.513 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.513 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.771 nvme0n1 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:45.771 09:15:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzBiODY2MTVhYzI3ODdmMzBkMjAyZjVmZDEwMTczMmNJ/ag+: 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzBiODY2MTVhYzI3ODdmMzBkMjAyZjVmZDEwMTczMmNJ/ag+: 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: ]] 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.771 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.030 nvme0n1 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.030 09:15:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDNiMTExZmUwOGVkZDJiMDExZjAyY2ZiZTg3NWFiMGRiMmZhN2IyZDAwZjYxNWY5lPWFqg==: 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: 00:24:46.030 
09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDNiMTExZmUwOGVkZDJiMDExZjAyY2ZiZTg3NWFiMGRiMmZhN2IyZDAwZjYxNWY5lPWFqg==: 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: ]] 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:46.030 09:15:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.030 09:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.289 nvme0n1 00:24:46.289 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.289 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.289 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.289 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.289 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.289 09:15:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.289 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.289 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.289 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.289 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.289 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.289 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.289 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:46.289 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.289 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:46.289 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:46.289 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:46.289 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmZlMTBiMDM5NzY2OGEyZTdjOWRlMDQ3NzcxNWFhMTgyYWRjMTllMWMwNGZhODZiNzg3MDhkZTM4MWUzODFjOf26/Fw=: 00:24:46.289 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:46.289 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:46.289 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:46.289 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmZlMTBiMDM5NzY2OGEyZTdjOWRlMDQ3NzcxNWFhMTgyYWRjMTllMWMwNGZhODZiNzg3MDhkZTM4MWUzODFjOf26/Fw=: 00:24:46.289 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:24:46.289 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:46.289 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.289 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:46.289 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:46.289 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:46.289 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.289 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:46.289 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.289 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.547 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.547 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.547 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:46.547 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:46.547 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:46.547 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.547 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.547 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:46.547 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.547 
09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:46.547 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:46.547 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:46.547 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:46.547 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.547 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.804 nvme0n1 00:24:46.804 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.804 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.804 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.804 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.804 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.804 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.804 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.804 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.804 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.804 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.804 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.804 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:46.804 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.804 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:46.804 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.805 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:46.805 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:46.805 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:46.805 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWJkZjU2NDA2OTE2ZGI2ZjVhOTE0M2JiMzEzM2QzYTIn25OK: 00:24:46.805 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: 00:24:46.805 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:46.805 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:46.805 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWJkZjU2NDA2OTE2ZGI2ZjVhOTE0M2JiMzEzM2QzYTIn25OK: 00:24:46.805 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: ]] 00:24:46.805 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: 00:24:46.805 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:46.805 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.805 09:15:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:46.805 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:46.805 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:46.805 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.805 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:46.805 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.805 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.805 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.805 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.805 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:46.805 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:46.805 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:46.805 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.805 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.805 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:46.805 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.805 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:46.805 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:46.805 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:46.805 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:46.805 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.805 09:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.368 nvme0n1 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: ]] 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:47.368 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:47.369 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:47.369 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:47.369 09:15:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.369 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.933 nvme0n1 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzBiODY2MTVhYzI3ODdmMzBkMjAyZjVmZDEwMTczMmNJ/ag+: 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzBiODY2MTVhYzI3ODdmMzBkMjAyZjVmZDEwMTczMmNJ/ag+: 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: ]] 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.933 09:15:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.933 09:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.500 nvme0n1 00:24:48.500 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.500 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.500 09:15:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.500 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.500 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.500 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.500 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.500 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDNiMTExZmUwOGVkZDJiMDExZjAyY2ZiZTg3NWFiMGRiMmZhN2IyZDAwZjYxNWY5lPWFqg==: 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:48.501 09:15:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDNiMTExZmUwOGVkZDJiMDExZjAyY2ZiZTg3NWFiMGRiMmZhN2IyZDAwZjYxNWY5lPWFqg==: 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: ]] 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:48.501 09:15:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.501 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.759 nvme0n1 00:24:48.759 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.759 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.759 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.759 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.759 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.759 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.017 09:15:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmZlMTBiMDM5NzY2OGEyZTdjOWRlMDQ3NzcxNWFhMTgyYWRjMTllMWMwNGZhODZiNzg3MDhkZTM4MWUzODFjOf26/Fw=: 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmZlMTBiMDM5NzY2OGEyZTdjOWRlMDQ3NzcxNWFhMTgyYWRjMTllMWMwNGZhODZiNzg3MDhkZTM4MWUzODFjOf26/Fw=: 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:49.017 09:15:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.017 09:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.582 nvme0n1 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWJkZjU2NDA2OTE2ZGI2ZjVhOTE0M2JiMzEzM2QzYTIn25OK: 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWJkZjU2NDA2OTE2ZGI2ZjVhOTE0M2JiMzEzM2QzYTIn25OK: 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: ]] 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:49.582 09:15:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:49.582 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:49.583 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:49.583 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.583 09:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.515 nvme0n1 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.515 09:15:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: ]] 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.515 09:15:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:50.515 09:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.515 09:15:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.446 nvme0n1 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NzBiODY2MTVhYzI3ODdmMzBkMjAyZjVmZDEwMTczMmNJ/ag+: 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzBiODY2MTVhYzI3ODdmMzBkMjAyZjVmZDEwMTczMmNJ/ag+: 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: ]] 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.446 09:15:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.446 09:15:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.008 nvme0n1 00:24:52.008 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.265 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.265 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.265 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.265 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.265 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.265 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDNiMTExZmUwOGVkZDJiMDExZjAyY2ZiZTg3NWFiMGRiMmZhN2IyZDAwZjYxNWY5lPWFqg==: 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:MDNiMTExZmUwOGVkZDJiMDExZjAyY2ZiZTg3NWFiMGRiMmZhN2IyZDAwZjYxNWY5lPWFqg==: 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: ]] 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.266 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.198 nvme0n1 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmZlMTBiMDM5NzY2OGEyZTdjOWRlMDQ3NzcxNWFhMTgyYWRjMTllMWMwNGZhODZiNzg3MDhkZTM4MWUzODFjOf26/Fw=: 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmZlMTBiMDM5NzY2OGEyZTdjOWRlMDQ3NzcxNWFhMTgyYWRjMTllMWMwNGZhODZiNzg3MDhkZTM4MWUzODFjOf26/Fw=: 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.198 
09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:53.198 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.199 09:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.131 nvme0n1 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWJkZjU2NDA2OTE2ZGI2ZjVhOTE0M2JiMzEzM2QzYTIn25OK: 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWJkZjU2NDA2OTE2ZGI2ZjVhOTE0M2JiMzEzM2QzYTIn25OK: 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: ]] 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.131 09:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.131 nvme0n1 00:24:54.131 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.131 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.131 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.131 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.131 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.131 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.131 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.131 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.131 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.131 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.131 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.131 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.131 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:54.131 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.131 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:54.131 
09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:54.131 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:54.131 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:24:54.131 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:24:54.131 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:54.131 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:54.131 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:24:54.131 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: ]] 00:24:54.131 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:24:54.131 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:54.131 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.131 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:54.131 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:54.131 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:54.131 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.131 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:24:54.131 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.131 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.131 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.131 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.132 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:54.132 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:54.132 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:54.132 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.132 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.132 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:54.132 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.132 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:54.132 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:54.132 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:54.132 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:54.132 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.132 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.389 nvme0n1 
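The `get_main_ns_ip` sequence traced repeatedly above (`nvmf/common.sh@769`–`783`) resolves which IP the initiator should dial by mapping the transport to the *name* of an environment variable and then dereferencing it. A minimal reconstruction from the xtrace lines — the real helper lives in SPDK's `test/nvmf/common.sh`, and the transport variable name (`TEST_TRANSPORT` here) is already expanded in the trace, so it is an assumption:

```shell
# Reconstruction of get_main_ns_ip from the xtrace above: pick the name of the
# env var holding the IP for the active transport, then dereference it.
# 10.0.0.1 matches the NVMF_INITIATOR_IP used throughout this run.
get_main_ns_ip() {
    local ip
    local -A ip_candidates

    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1    # ${!ip} is bash indirection: the IP itself

    echo "${!ip}"
}

TEST_TRANSPORT=tcp
NVMF_INITIATOR_IP=10.0.0.1
get_main_ns_ip    # prints 10.0.0.1
```

This explains the otherwise cryptic `ip=NVMF_INITIATOR_IP` followed by `[[ -z 10.0.0.1 ]]` in the trace: the first test checks the variable name, the second its indirect value.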
00:24:54.389 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.389 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.389 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.389 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.389 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.389 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.389 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.389 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.389 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.389 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.389 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.389 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.389 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:54.389 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.389 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:54.389 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:54.389 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:54.389 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzBiODY2MTVhYzI3ODdmMzBkMjAyZjVmZDEwMTczMmNJ/ag+: 00:24:54.389 09:15:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: 00:24:54.389 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:54.389 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:54.389 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzBiODY2MTVhYzI3ODdmMzBkMjAyZjVmZDEwMTczMmNJ/ag+: 00:24:54.389 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: ]] 00:24:54.389 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: 00:24:54.389 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:54.389 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.389 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:54.389 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:54.389 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:54.390 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.390 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:54.390 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.390 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.390 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.390 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.390 
09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:54.390 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:54.390 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:54.390 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.390 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.390 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:54.390 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.390 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:54.390 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:54.390 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:54.390 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:54.390 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.390 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.647 nvme0n1 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.648 09:15:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDNiMTExZmUwOGVkZDJiMDExZjAyY2ZiZTg3NWFiMGRiMmZhN2IyZDAwZjYxNWY5lPWFqg==: 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MDNiMTExZmUwOGVkZDJiMDExZjAyY2ZiZTg3NWFiMGRiMmZhN2IyZDAwZjYxNWY5lPWFqg==: 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: ]] 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.648 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.906 nvme0n1 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmZlMTBiMDM5NzY2OGEyZTdjOWRlMDQ3NzcxNWFhMTgyYWRjMTllMWMwNGZhODZiNzg3MDhkZTM4MWUzODFjOf26/Fw=: 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmZlMTBiMDM5NzY2OGEyZTdjOWRlMDQ3NzcxNWFhMTgyYWRjMTllMWMwNGZhODZiNzg3MDhkZTM4MWUzODFjOf26/Fw=: 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.906 09:15:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.906 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.164 nvme0n1 00:24:55.164 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.164 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.164 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.164 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.164 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.164 09:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.164 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.164 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.164 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.164 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.164 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.164 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:55.164 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.164 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:24:55.164 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.164 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:55.164 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:55.164 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:55.164 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWJkZjU2NDA2OTE2ZGI2ZjVhOTE0M2JiMzEzM2QzYTIn25OK: 00:24:55.164 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: 00:24:55.164 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:55.164 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:55.164 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWJkZjU2NDA2OTE2ZGI2ZjVhOTE0M2JiMzEzM2QzYTIn25OK: 00:24:55.164 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: ]] 00:24:55.165 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: 00:24:55.165 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:55.165 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.165 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:55.165 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:55.165 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:24:55.165 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.165 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:55.165 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.165 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.165 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.165 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.165 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:55.165 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:55.165 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:55.165 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.165 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.165 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:55.165 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.165 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:55.165 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:55.165 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:55.165 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:55.165 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.165 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.423 nvme0n1 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:55.423 
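The `DHHC-1:...` strings echoed into the kernel target are DH-HMAC-CHAP secrets in the representation `nvme-cli gen-dhchap-key` produces: `DHHC-1:<2-hex transform id>:<base64 payload>:`, where — to my reading of that format — the decoded payload is a 32/48/64-byte key followed by a 4-byte CRC32 trailer. A quick structural check against one key from this run (the CRC value itself is not verified here):

```shell
# Structural check of a DHHC-1 secret taken verbatim from this trace.
# Assumed layout: DHHC-1:<transform id 00-03>:<base64(key || crc32)>:
secret='DHHC-1:00:NWJkZjU2NDA2OTE2ZGI2ZjVhOTE0M2JiMzEzM2QzYTIn25OK:'

IFS=: read -r magic xform b64 _ <<< "$secret"
[[ $magic == DHHC-1 ]] || { echo "bad magic"; exit 1; }
[[ $xform =~ ^0[0-3]$ ]] || { echo "bad transform id"; exit 1; }

declen=$(printf '%s' "$b64" | base64 -d | wc -c)
keylen=$((declen - 4))              # strip the 4-byte CRC32 trailer
case $keylen in
    32|48|64) echo "ok: ${keylen}-byte key (transform $xform)" ;;
    *) echo "bad payload length $declen"; exit 1 ;;
esac
```

For this `:00:` (no-transform) secret the payload decodes to 36 bytes, i.e. a 32-byte key plus the trailer, which is consistent with the short `:00:`/`:01:` keys and the longer `:03:` keys seen elsewhere in the log.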
09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: ]] 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:55.423 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:55.424 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:55.424 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.424 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.683 nvme0n1 00:24:55.683 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:24:55.683 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.683 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.683 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.683 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.683 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.683 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.683 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.683 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.683 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.683 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.683 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.683 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:55.683 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.683 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:55.684 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:55.684 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:55.684 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzBiODY2MTVhYzI3ODdmMzBkMjAyZjVmZDEwMTczMmNJ/ag+: 00:24:55.684 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: 
00:24:55.684 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:55.684 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:55.684 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzBiODY2MTVhYzI3ODdmMzBkMjAyZjVmZDEwMTczMmNJ/ag+: 00:24:55.684 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: ]] 00:24:55.684 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: 00:24:55.684 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:55.684 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.684 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:55.684 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:55.684 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:55.684 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.684 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:55.684 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.684 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.684 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.684 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.684 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:55.684 09:15:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:55.684 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:55.684 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.684 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.684 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:55.684 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.684 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:55.684 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:55.684 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:55.684 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:55.684 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.684 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.941 nvme0n1 00:24:55.941 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.941 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.941 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.941 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.941 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.941 09:15:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.941 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.941 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.941 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.941 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.941 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.941 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.941 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:55.941 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.941 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:55.941 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:55.941 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:55.941 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDNiMTExZmUwOGVkZDJiMDExZjAyY2ZiZTg3NWFiMGRiMmZhN2IyZDAwZjYxNWY5lPWFqg==: 00:24:55.941 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: 00:24:55.941 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:55.941 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:55.941 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDNiMTExZmUwOGVkZDJiMDExZjAyY2ZiZTg3NWFiMGRiMmZhN2IyZDAwZjYxNWY5lPWFqg==: 00:24:55.941 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: ]] 00:24:55.941 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: 00:24:55.941 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:55.941 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.941 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:55.941 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:55.942 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:55.942 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.942 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:55.942 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.942 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.942 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.942 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.942 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:55.942 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:55.942 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:55.942 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.942 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.942 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:55.942 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.942 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:55.942 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:55.942 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:55.942 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:55.942 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.942 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.199 nvme0n1 00:24:56.199 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.199 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.199 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.199 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.199 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.199 09:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.199 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.199 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.199 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:24:56.199 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.199 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.199 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.200 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:56.200 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.200 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:56.200 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:56.200 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:56.200 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmZlMTBiMDM5NzY2OGEyZTdjOWRlMDQ3NzcxNWFhMTgyYWRjMTllMWMwNGZhODZiNzg3MDhkZTM4MWUzODFjOf26/Fw=: 00:24:56.200 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:56.200 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:56.200 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:56.200 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmZlMTBiMDM5NzY2OGEyZTdjOWRlMDQ3NzcxNWFhMTgyYWRjMTllMWMwNGZhODZiNzg3MDhkZTM4MWUzODFjOf26/Fw=: 00:24:56.200 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:56.200 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:56.200 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.200 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:56.200 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:24:56.200 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:56.200 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.200 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:56.200 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.200 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.200 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.200 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.200 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:56.200 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:56.200 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:56.200 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.200 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.200 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:56.200 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.200 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:56.200 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:56.200 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:56.200 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:56.200 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.200 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.458 nvme0n1 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.458 09:15:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWJkZjU2NDA2OTE2ZGI2ZjVhOTE0M2JiMzEzM2QzYTIn25OK: 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWJkZjU2NDA2OTE2ZGI2ZjVhOTE0M2JiMzEzM2QzYTIn25OK: 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: ]] 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.458 09:15:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:56.458 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:56.459 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.459 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.459 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:56.459 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.459 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:56.459 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:56.459 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:56.459 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:56.459 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.459 09:15:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.717 nvme0n1 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: ]] 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.717 
09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.717 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.975 nvme0n1 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.975 09:15:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzBiODY2MTVhYzI3ODdmMzBkMjAyZjVmZDEwMTczMmNJ/ag+: 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:56.975 09:15:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzBiODY2MTVhYzI3ODdmMzBkMjAyZjVmZDEwMTczMmNJ/ag+: 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: ]] 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:56.975 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.976 09:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.233 nvme0n1 00:24:57.233 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.233 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.233 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.233 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.233 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.233 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDNiMTExZmUwOGVkZDJiMDExZjAyY2ZiZTg3NWFiMGRiMmZhN2IyZDAwZjYxNWY5lPWFqg==: 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDNiMTExZmUwOGVkZDJiMDExZjAyY2ZiZTg3NWFiMGRiMmZhN2IyZDAwZjYxNWY5lPWFqg==: 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: ]] 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:57.491 09:15:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.491 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.750 nvme0n1 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.750 09:15:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmZlMTBiMDM5NzY2OGEyZTdjOWRlMDQ3NzcxNWFhMTgyYWRjMTllMWMwNGZhODZiNzg3MDhkZTM4MWUzODFjOf26/Fw=: 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmZlMTBiMDM5NzY2OGEyZTdjOWRlMDQ3NzcxNWFhMTgyYWRjMTllMWMwNGZhODZiNzg3MDhkZTM4MWUzODFjOf26/Fw=: 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:57.750 09:15:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:57.750 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:57.751 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:57.751 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:57.751 
09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.751 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.010 nvme0n1 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:58.010 09:15:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWJkZjU2NDA2OTE2ZGI2ZjVhOTE0M2JiMzEzM2QzYTIn25OK: 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWJkZjU2NDA2OTE2ZGI2ZjVhOTE0M2JiMzEzM2QzYTIn25OK: 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: ]] 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.010 09:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.576 nvme0n1 
00:24:58.576 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.576 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.576 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.576 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.576 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.576 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.576 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.576 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.576 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.576 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.576 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.576 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.576 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:58.576 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.576 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:58.576 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:58.576 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:58.576 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:24:58.576 09:15:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:24:58.576 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:58.576 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:58.576 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:24:58.576 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: ]] 00:24:58.576 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:24:58.576 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:58.576 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.576 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:58.576 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:58.576 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:58.576 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.576 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:58.576 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.576 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.577 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.577 
09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.577 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:58.577 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:58.577 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:58.577 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.577 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.577 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:58.577 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.577 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:58.577 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:58.577 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:58.577 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:58.577 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.577 09:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.143 nvme0n1 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.143 09:15:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzBiODY2MTVhYzI3ODdmMzBkMjAyZjVmZDEwMTczMmNJ/ag+: 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:59.143 09:15:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzBiODY2MTVhYzI3ODdmMzBkMjAyZjVmZDEwMTczMmNJ/ag+: 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: ]] 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:59.143 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:59.144 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.144 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.144 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:59.144 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.144 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:59.144 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:59.144 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:59.144 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:59.144 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.144 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.710 nvme0n1 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDNiMTExZmUwOGVkZDJiMDExZjAyY2ZiZTg3NWFiMGRiMmZhN2IyZDAwZjYxNWY5lPWFqg==: 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDNiMTExZmUwOGVkZDJiMDExZjAyY2ZiZTg3NWFiMGRiMmZhN2IyZDAwZjYxNWY5lPWFqg==: 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: ]] 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: 00:24:59.710 09:15:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.710 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.711 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:59.711 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.711 09:15:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:59.711 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:59.711 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:59.711 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:59.711 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.711 09:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.276 nvme0n1 00:25:00.276 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.276 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.276 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.276 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.276 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.276 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.276 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.276 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.276 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.276 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.276 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.277 09:15:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.277 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:00.277 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.277 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:00.277 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:00.277 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:00.277 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmZlMTBiMDM5NzY2OGEyZTdjOWRlMDQ3NzcxNWFhMTgyYWRjMTllMWMwNGZhODZiNzg3MDhkZTM4MWUzODFjOf26/Fw=: 00:25:00.277 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:00.277 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:00.277 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:00.277 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmZlMTBiMDM5NzY2OGEyZTdjOWRlMDQ3NzcxNWFhMTgyYWRjMTllMWMwNGZhODZiNzg3MDhkZTM4MWUzODFjOf26/Fw=: 00:25:00.277 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:00.277 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:00.277 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.277 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:00.277 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:00.277 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:00.277 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:00.277 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:00.277 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.277 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.277 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.277 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.277 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:00.277 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:00.277 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:00.277 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.277 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.277 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:00.277 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.277 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:00.277 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:00.277 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:00.277 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:00.277 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:25:00.277 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.915 nvme0n1 00:25:00.915 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.915 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.915 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:00.916 09:15:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWJkZjU2NDA2OTE2ZGI2ZjVhOTE0M2JiMzEzM2QzYTIn25OK: 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWJkZjU2NDA2OTE2ZGI2ZjVhOTE0M2JiMzEzM2QzYTIn25OK: 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: ]] 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.916 09:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.847 nvme0n1 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: ]] 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:25:01.847 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:01.848 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:01.848 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:01.848 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.848 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.848 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:01.848 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.848 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:01.848 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:01.848 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:01.848 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:01.848 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.848 09:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.780 nvme0n1 00:25:02.780 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.780 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.780 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.780 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:25:02.780 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.780 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.780 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.780 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.780 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.780 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.780 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.780 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.780 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:02.780 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.780 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:02.780 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:02.780 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:02.780 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzBiODY2MTVhYzI3ODdmMzBkMjAyZjVmZDEwMTczMmNJ/ag+: 00:25:02.780 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: 00:25:02.780 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:02.780 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:02.781 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NzBiODY2MTVhYzI3ODdmMzBkMjAyZjVmZDEwMTczMmNJ/ag+: 00:25:02.781 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: ]] 00:25:02.781 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: 00:25:02.781 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:02.781 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.781 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:02.781 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:02.781 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:02.781 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.781 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:02.781 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.781 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.781 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.781 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.781 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:02.781 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:02.781 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:02.781 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.781 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.781 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:02.781 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.781 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:02.781 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:02.781 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:02.781 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:02.781 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.781 09:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.712 nvme0n1 00:25:03.712 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.712 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.712 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.712 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.712 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.712 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.712 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.712 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:03.712 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.712 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.712 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.712 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.712 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:03.712 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.712 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:03.712 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:03.712 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:03.712 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDNiMTExZmUwOGVkZDJiMDExZjAyY2ZiZTg3NWFiMGRiMmZhN2IyZDAwZjYxNWY5lPWFqg==: 00:25:03.712 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: 00:25:03.712 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:03.712 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:03.712 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDNiMTExZmUwOGVkZDJiMDExZjAyY2ZiZTg3NWFiMGRiMmZhN2IyZDAwZjYxNWY5lPWFqg==: 00:25:03.712 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: ]] 00:25:03.712 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: 00:25:03.712 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:03.712 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.712 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:03.712 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:03.712 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:03.712 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.712 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:03.712 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.712 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.712 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.712 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.713 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:03.713 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:03.713 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:03.713 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.713 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.713 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:03.713 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.713 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:25:03.713 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:03.713 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:03.713 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:03.713 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.713 09:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.645 nvme0n1 00:25:04.645 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.645 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.645 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.645 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.645 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.645 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.645 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.645 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.645 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.645 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.645 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.645 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:04.645 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:04.645 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.645 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:04.645 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:04.645 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:04.645 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmZlMTBiMDM5NzY2OGEyZTdjOWRlMDQ3NzcxNWFhMTgyYWRjMTllMWMwNGZhODZiNzg3MDhkZTM4MWUzODFjOf26/Fw=: 00:25:04.645 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:04.645 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:04.645 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:04.645 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmZlMTBiMDM5NzY2OGEyZTdjOWRlMDQ3NzcxNWFhMTgyYWRjMTllMWMwNGZhODZiNzg3MDhkZTM4MWUzODFjOf26/Fw=: 00:25:04.645 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:04.646 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:04.646 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.646 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:04.646 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:04.646 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:04.646 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.646 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:04.646 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.646 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.646 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.646 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.646 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:04.646 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:04.646 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:04.646 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.646 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.646 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:04.646 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.646 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:04.646 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:04.646 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:04.646 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:04.646 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.646 09:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:05.579 nvme0n1 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWJkZjU2NDA2OTE2ZGI2ZjVhOTE0M2JiMzEzM2QzYTIn25OK: 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWJkZjU2NDA2OTE2ZGI2ZjVhOTE0M2JiMzEzM2QzYTIn25OK: 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: ]] 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:05.579 09:15:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.579 nvme0n1 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.579 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: ]] 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.838 nvme0n1 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzBiODY2MTVhYzI3ODdmMzBkMjAyZjVmZDEwMTczMmNJ/ag+: 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzBiODY2MTVhYzI3ODdmMzBkMjAyZjVmZDEwMTczMmNJ/ag+: 
00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: ]] 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.838 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:05.839 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.839 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:05.839 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:05.839 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:05.839 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:05.839 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.839 09:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.098 nvme0n1 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDNiMTExZmUwOGVkZDJiMDExZjAyY2ZiZTg3NWFiMGRiMmZhN2IyZDAwZjYxNWY5lPWFqg==: 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDNiMTExZmUwOGVkZDJiMDExZjAyY2ZiZTg3NWFiMGRiMmZhN2IyZDAwZjYxNWY5lPWFqg==: 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: ]] 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:06.098 09:15:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.098 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.358 nvme0n1 00:25:06.358 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.358 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.358 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.358 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.358 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.358 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.358 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.358 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.358 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.358 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.358 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.358 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.358 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe2048 4 00:25:06.358 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.358 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:06.358 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:06.358 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:06.358 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmZlMTBiMDM5NzY2OGEyZTdjOWRlMDQ3NzcxNWFhMTgyYWRjMTllMWMwNGZhODZiNzg3MDhkZTM4MWUzODFjOf26/Fw=: 00:25:06.358 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:06.358 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:06.358 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:06.359 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmZlMTBiMDM5NzY2OGEyZTdjOWRlMDQ3NzcxNWFhMTgyYWRjMTllMWMwNGZhODZiNzg3MDhkZTM4MWUzODFjOf26/Fw=: 00:25:06.359 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:06.359 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:06.359 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.359 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:06.359 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:06.359 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:06.359 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.359 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 
00:25:06.359 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.359 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.359 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.359 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.359 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:06.359 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:06.359 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:06.359 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.359 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.359 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:06.359 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.359 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:06.359 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:06.359 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:06.359 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:06.359 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.359 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.617 nvme0n1 00:25:06.617 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.617 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.617 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.617 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.617 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.617 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.617 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.617 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.617 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.617 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.617 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.617 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:06.617 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.617 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:06.617 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.617 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:06.617 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:06.617 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:06.617 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWJkZjU2NDA2OTE2ZGI2ZjVhOTE0M2JiMzEzM2QzYTIn25OK: 
00:25:06.617 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: 00:25:06.617 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:06.617 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:06.617 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWJkZjU2NDA2OTE2ZGI2ZjVhOTE0M2JiMzEzM2QzYTIn25OK: 00:25:06.617 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: ]] 00:25:06.617 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: 00:25:06.617 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:06.617 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.617 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:06.617 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:06.617 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:06.618 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.618 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:06.618 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.618 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.618 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.618 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.618 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:06.618 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:06.618 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:06.618 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.618 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.618 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:06.618 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.618 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:06.618 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:06.618 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:06.618 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:06.618 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.618 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.875 nvme0n1 00:25:06.875 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.875 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.875 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 
-- # xtrace_disable 00:25:06.875 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.875 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.875 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.875 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.875 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.875 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.875 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.875 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.875 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.875 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:06.875 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.875 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:06.875 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:06.875 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:06.875 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:25:06.875 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:25:06.875 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:06.875 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe3072 00:25:06.875 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:25:06.875 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: ]] 00:25:06.875 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:25:06.875 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:06.875 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.875 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:06.876 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:06.876 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:06.876 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.876 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:06.876 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.876 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.876 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.876 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.876 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:06.876 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:06.876 09:15:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:06.876 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.876 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.876 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:06.876 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.876 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:06.876 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:06.876 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:06.876 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:06.876 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.876 09:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.133 nvme0n1 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.133 09:15:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzBiODY2MTVhYzI3ODdmMzBkMjAyZjVmZDEwMTczMmNJ/ag+: 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzBiODY2MTVhYzI3ODdmMzBkMjAyZjVmZDEwMTczMmNJ/ag+: 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: ]] 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 
-- # echo DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.133 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.391 nvme0n1 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDNiMTExZmUwOGVkZDJiMDExZjAyY2ZiZTg3NWFiMGRiMmZhN2IyZDAwZjYxNWY5lPWFqg==: 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDNiMTExZmUwOGVkZDJiMDExZjAyY2ZiZTg3NWFiMGRiMmZhN2IyZDAwZjYxNWY5lPWFqg==: 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: ]] 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:07.391 09:15:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.391 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.648 nvme0n1 00:25:07.648 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.648 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.648 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.648 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.648 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.648 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.648 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.648 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.648 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.648 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.648 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.648 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.648 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:07.648 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.648 09:15:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:07.648 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:07.648 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:07.648 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmZlMTBiMDM5NzY2OGEyZTdjOWRlMDQ3NzcxNWFhMTgyYWRjMTllMWMwNGZhODZiNzg3MDhkZTM4MWUzODFjOf26/Fw=: 00:25:07.648 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:07.648 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:07.648 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:07.648 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmZlMTBiMDM5NzY2OGEyZTdjOWRlMDQ3NzcxNWFhMTgyYWRjMTllMWMwNGZhODZiNzg3MDhkZTM4MWUzODFjOf26/Fw=: 00:25:07.648 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:07.648 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:07.649 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.649 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:07.649 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:07.649 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:07.649 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.649 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:07.649 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.649 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:07.649 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.649 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.649 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:07.649 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:07.649 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:07.649 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.649 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.649 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:07.649 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.649 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:07.649 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:07.649 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:07.649 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:07.649 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.649 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.906 nvme0n1 00:25:07.906 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.906 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.906 
09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.906 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.906 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.906 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.906 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.906 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.906 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.906 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.906 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.906 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:07.906 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.906 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:07.906 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.906 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:07.906 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:07.907 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:07.907 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWJkZjU2NDA2OTE2ZGI2ZjVhOTE0M2JiMzEzM2QzYTIn25OK: 00:25:07.907 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: 00:25:07.907 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:07.907 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:07.907 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWJkZjU2NDA2OTE2ZGI2ZjVhOTE0M2JiMzEzM2QzYTIn25OK: 00:25:07.907 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: ]] 00:25:07.907 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: 00:25:07.907 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:07.907 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.907 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:07.907 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:07.907 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:07.907 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.907 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:07.907 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.907 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.907 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.907 09:15:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.907 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:07.907 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:07.907 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:07.907 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.907 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.907 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:07.907 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.907 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:07.907 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:07.907 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:07.907 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:07.907 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.907 09:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.165 nvme0n1 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.165 09:15:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:08.165 
09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: ]] 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
local -A ip_candidates 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:08.165 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.166 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.425 nvme0n1 00:25:08.425 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.425 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.425 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.425 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.425 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.425 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.683 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:25:08.683 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.683 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.683 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.683 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.683 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.683 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:08.683 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.683 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:08.683 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:08.683 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:08.683 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzBiODY2MTVhYzI3ODdmMzBkMjAyZjVmZDEwMTczMmNJ/ag+: 00:25:08.683 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: 00:25:08.683 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:08.683 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:08.683 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzBiODY2MTVhYzI3ODdmMzBkMjAyZjVmZDEwMTczMmNJ/ag+: 00:25:08.683 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: ]] 00:25:08.683 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: 
00:25:08.683 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:08.683 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.683 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:08.683 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:08.683 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:08.683 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.683 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:08.683 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.683 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.683 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.683 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.683 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:08.683 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:08.683 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:08.683 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.683 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.684 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:08.684 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.684 09:15:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:08.684 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:08.684 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:08.684 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:08.684 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.684 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.941 nvme0n1 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.941 09:15:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDNiMTExZmUwOGVkZDJiMDExZjAyY2ZiZTg3NWFiMGRiMmZhN2IyZDAwZjYxNWY5lPWFqg==: 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDNiMTExZmUwOGVkZDJiMDExZjAyY2ZiZTg3NWFiMGRiMmZhN2IyZDAwZjYxNWY5lPWFqg==: 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: ]] 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 
00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.941 09:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.198 nvme0n1 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:09.198 09:15:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmZlMTBiMDM5NzY2OGEyZTdjOWRlMDQ3NzcxNWFhMTgyYWRjMTllMWMwNGZhODZiNzg3MDhkZTM4MWUzODFjOf26/Fw=: 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmZlMTBiMDM5NzY2OGEyZTdjOWRlMDQ3NzcxNWFhMTgyYWRjMTllMWMwNGZhODZiNzg3MDhkZTM4MWUzODFjOf26/Fw=: 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.198 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:09.199 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.199 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:09.199 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:09.199 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:09.199 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:09.199 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.199 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.455 nvme0n1 00:25:09.455 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.455 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.455 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:25:09.455 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.455 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.455 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.455 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.712 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.712 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.712 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.712 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.712 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:09.712 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.712 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:09.712 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.712 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:09.712 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:09.712 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:09.712 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWJkZjU2NDA2OTE2ZGI2ZjVhOTE0M2JiMzEzM2QzYTIn25OK: 00:25:09.712 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: 00:25:09.712 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:25:09.712 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:09.712 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWJkZjU2NDA2OTE2ZGI2ZjVhOTE0M2JiMzEzM2QzYTIn25OK: 00:25:09.712 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: ]] 00:25:09.712 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: 00:25:09.712 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:09.712 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.712 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:09.712 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:09.712 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:09.712 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.712 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:09.712 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.712 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.712 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.712 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.712 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:09.712 09:15:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:09.712 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:09.712 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.712 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.712 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:09.713 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.713 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:09.713 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:09.713 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:09.713 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:09.713 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.713 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.278 nvme0n1 00:25:10.278 09:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.278 09:15:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:25:10.278 09:15:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: ]] 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.278 09:15:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.278 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.536 nvme0n1 00:25:10.536 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.793 09:15:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzBiODY2MTVhYzI3ODdmMzBkMjAyZjVmZDEwMTczMmNJ/ag+: 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzBiODY2MTVhYzI3ODdmMzBkMjAyZjVmZDEwMTczMmNJ/ag+: 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: ]] 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:10.793 09:15:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.793 09:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.358 nvme0n1 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDNiMTExZmUwOGVkZDJiMDExZjAyY2ZiZTg3NWFiMGRiMmZhN2IyZDAwZjYxNWY5lPWFqg==: 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDNiMTExZmUwOGVkZDJiMDExZjAyY2ZiZTg3NWFiMGRiMmZhN2IyZDAwZjYxNWY5lPWFqg==: 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: ]] 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:11.358 09:15:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.358 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.923 nvme0n1 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmZlMTBiMDM5NzY2OGEyZTdjOWRlMDQ3NzcxNWFhMTgyYWRjMTllMWMwNGZhODZiNzg3MDhkZTM4MWUzODFjOf26/Fw=: 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmZlMTBiMDM5NzY2OGEyZTdjOWRlMDQ3NzcxNWFhMTgyYWRjMTllMWMwNGZhODZiNzg3MDhkZTM4MWUzODFjOf26/Fw=: 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.923 
09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.923 09:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.488 nvme0n1 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWJkZjU2NDA2OTE2ZGI2ZjVhOTE0M2JiMzEzM2QzYTIn25OK: 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:12.488 09:15:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWJkZjU2NDA2OTE2ZGI2ZjVhOTE0M2JiMzEzM2QzYTIn25OK: 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: ]] 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2U3ODIzYWY1ZGYxZDFjODcyOGQxNzAyZGM4NjhiNWU1MzU0MGQ5NjkyOTA5NjNjN2Y0MThjMjU1NDhkODM1Yl/Fq/4=: 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.488 09:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.421 nvme0n1 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: ]] 00:25:13.421 09:15:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.421 09:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.354 nvme0n1 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzBiODY2MTVhYzI3ODdmMzBkMjAyZjVmZDEwMTczMmNJ/ag+: 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzBiODY2MTVhYzI3ODdmMzBkMjAyZjVmZDEwMTczMmNJ/ag+: 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: ]] 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:14.354 
09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.354 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.287 nvme0n1 00:25:15.287 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.287 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.287 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.287 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.287 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.287 09:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.287 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.287 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.287 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.287 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.287 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.287 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.287 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:15.287 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.287 09:15:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:15.287 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:15.287 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:15.287 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDNiMTExZmUwOGVkZDJiMDExZjAyY2ZiZTg3NWFiMGRiMmZhN2IyZDAwZjYxNWY5lPWFqg==: 00:25:15.287 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: 00:25:15.287 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:15.287 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:15.287 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDNiMTExZmUwOGVkZDJiMDExZjAyY2ZiZTg3NWFiMGRiMmZhN2IyZDAwZjYxNWY5lPWFqg==: 00:25:15.287 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: ]] 00:25:15.287 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjAxZjNhMzA0ZTA4ODc3YWRkNTM5YWI3Zjc2OWUzZjhPv3HC: 00:25:15.287 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:15.287 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.287 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:15.287 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:15.287 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:15.287 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.287 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:15.287 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.287 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.287 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.287 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.287 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:15.287 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.287 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.287 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.288 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.288 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.288 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.288 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.288 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.288 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.288 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:15.288 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.288 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:16.220 nvme0n1 00:25:16.220 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.220 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.220 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.220 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.220 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.220 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.220 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.220 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.220 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.220 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.220 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.220 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.220 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:16.220 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.220 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:16.220 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:16.220 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:16.220 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MmZlMTBiMDM5NzY2OGEyZTdjOWRlMDQ3NzcxNWFhMTgyYWRjMTllMWMwNGZhODZiNzg3MDhkZTM4MWUzODFjOf26/Fw=: 00:25:16.220 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:16.220 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:16.220 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:16.220 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmZlMTBiMDM5NzY2OGEyZTdjOWRlMDQ3NzcxNWFhMTgyYWRjMTllMWMwNGZhODZiNzg3MDhkZTM4MWUzODFjOf26/Fw=: 00:25:16.220 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:16.220 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:16.220 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.220 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:16.220 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:16.220 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:16.221 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.221 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:16.221 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.221 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.221 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.221 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.221 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:16.221 
09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:16.221 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:16.221 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.221 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.221 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:16.221 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.221 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:16.221 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:16.221 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:16.221 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:16.221 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.221 09:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.785 nvme0n1 00:25:16.786 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.786 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.786 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.786 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.786 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.786 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: ]] 00:25:17.045 
09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.045 request: 00:25:17.045 { 00:25:17.045 "name": "nvme0", 00:25:17.045 "trtype": "tcp", 00:25:17.045 "traddr": "10.0.0.1", 00:25:17.045 "adrfam": "ipv4", 00:25:17.045 "trsvcid": "4420", 00:25:17.045 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:17.045 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:17.045 "prchk_reftag": false, 00:25:17.045 "prchk_guard": false, 00:25:17.045 "hdgst": false, 00:25:17.045 "ddgst": false, 00:25:17.045 "allow_unrecognized_csi": false, 00:25:17.045 "method": "bdev_nvme_attach_controller", 00:25:17.045 "req_id": 1 00:25:17.045 } 00:25:17.045 Got JSON-RPC error response 00:25:17.045 response: 00:25:17.045 { 00:25:17.045 "code": -5, 00:25:17.045 "message": "Input/output 
error" 00:25:17.045 } 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.045 09:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.045 request: 00:25:17.045 { 00:25:17.045 "name": "nvme0", 00:25:17.045 "trtype": "tcp", 00:25:17.045 "traddr": "10.0.0.1", 
00:25:17.045 "adrfam": "ipv4", 00:25:17.304 "trsvcid": "4420", 00:25:17.304 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:17.304 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:17.304 "prchk_reftag": false, 00:25:17.304 "prchk_guard": false, 00:25:17.304 "hdgst": false, 00:25:17.304 "ddgst": false, 00:25:17.304 "dhchap_key": "key2", 00:25:17.304 "allow_unrecognized_csi": false, 00:25:17.304 "method": "bdev_nvme_attach_controller", 00:25:17.304 "req_id": 1 00:25:17.304 } 00:25:17.304 Got JSON-RPC error response 00:25:17.304 response: 00:25:17.304 { 00:25:17.304 "code": -5, 00:25:17.304 "message": "Input/output error" 00:25:17.304 } 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:17.304 09:15:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:17.304 09:15:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.304 request: 00:25:17.304 { 00:25:17.304 "name": "nvme0", 00:25:17.304 "trtype": "tcp", 00:25:17.304 "traddr": "10.0.0.1", 00:25:17.304 "adrfam": "ipv4", 00:25:17.304 "trsvcid": "4420", 00:25:17.304 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:17.304 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:17.304 "prchk_reftag": false, 00:25:17.304 "prchk_guard": false, 00:25:17.304 "hdgst": false, 00:25:17.304 "ddgst": false, 00:25:17.304 "dhchap_key": "key1", 00:25:17.304 "dhchap_ctrlr_key": "ckey2", 00:25:17.304 "allow_unrecognized_csi": false, 00:25:17.304 "method": "bdev_nvme_attach_controller", 00:25:17.304 "req_id": 1 00:25:17.304 } 00:25:17.304 Got JSON-RPC error response 00:25:17.304 response: 00:25:17.304 { 00:25:17.304 "code": -5, 00:25:17.304 "message": "Input/output error" 00:25:17.304 } 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:17.304 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.305 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.562 nvme0n1 00:25:17.562 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.562 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:17.562 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.562 09:15:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:17.562 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:17.562 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzBiODY2MTVhYzI3ODdmMzBkMjAyZjVmZDEwMTczMmNJ/ag+: 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzBiODY2MTVhYzI3ODdmMzBkMjAyZjVmZDEwMTczMmNJ/ag+: 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: ]] 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.563 09:15:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.563 request: 00:25:17.563 { 00:25:17.563 "name": "nvme0", 00:25:17.563 "dhchap_key": "key1", 00:25:17.563 "dhchap_ctrlr_key": "ckey2", 00:25:17.563 "method": "bdev_nvme_set_keys", 00:25:17.563 "req_id": 1 00:25:17.563 } 00:25:17.563 Got JSON-RPC error response 00:25:17.563 response: 00:25:17.563 { 00:25:17.563 "code": -13, 00:25:17.563 "message": "Permission denied" 00:25:17.563 } 00:25:17.563 
09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:17.563 09:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:18.935 09:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.935 09:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:18.935 09:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.935 09:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.935 09:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.935 09:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:18.935 09:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTA4YjBmMGFlZTMwNWM1ZjY4Y2FiZjk5ZGIxYTg2OTBiNjEyMWU4ZDQ2NThhNTk5IcycYg==: 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: ]] 00:25:19.868 09:15:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDM0MjY4Yjc5OTgyZWY3YjU1NTcwN2JlMTUwMzU3NWU2N2Q4NGE1ZTFiMTFkMzAwg9tDPw==: 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.868 nvme0n1 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.868 09:15:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzBiODY2MTVhYzI3ODdmMzBkMjAyZjVmZDEwMTczMmNJ/ag+: 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzBiODY2MTVhYzI3ODdmMzBkMjAyZjVmZDEwMTczMmNJ/ag+: 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: ]] 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWExZGMwZTAyOGI4MWZmZmQ4MjgyY2M0YjQwNzAzODGjYegU: 00:25:19.868 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:19.869 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:19.869 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:19.869 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:19.869 
09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:19.869 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:19.869 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:19.869 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:19.869 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.869 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.869 request: 00:25:19.869 { 00:25:19.869 "name": "nvme0", 00:25:19.869 "dhchap_key": "key2", 00:25:19.869 "dhchap_ctrlr_key": "ckey1", 00:25:19.869 "method": "bdev_nvme_set_keys", 00:25:19.869 "req_id": 1 00:25:19.869 } 00:25:19.869 Got JSON-RPC error response 00:25:19.869 response: 00:25:19.869 { 00:25:19.869 "code": -13, 00:25:19.869 "message": "Permission denied" 00:25:19.869 } 00:25:19.869 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:19.869 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:19.869 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:19.869 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:19.869 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:19.869 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.869 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:19.869 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.869 09:15:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.869 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.126 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:25:20.126 09:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:25:21.057 09:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.057 09:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:21.057 09:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.057 09:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.057 09:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.057 09:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:25:21.057 09:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:25:21.988 09:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.988 09:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.988 09:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.988 09:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:21.988 09:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.988 09:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:25:21.988 09:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:25:21.988 09:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:25:21.988 09:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:21.988 
09:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:21.988 09:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:25:21.988 09:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:21.988 09:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:25:21.988 09:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:21.988 09:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:21.988 rmmod nvme_tcp 00:25:21.988 rmmod nvme_fabrics 00:25:22.245 09:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:22.245 09:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:25:22.245 09:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:25:22.246 09:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3422021 ']' 00:25:22.246 09:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3422021 00:25:22.246 09:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 3422021 ']' 00:25:22.246 09:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 3422021 00:25:22.246 09:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:25:22.246 09:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:22.246 09:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3422021 00:25:22.246 09:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:22.246 09:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:22.246 09:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 3422021' 00:25:22.246 killing process with pid 3422021 00:25:22.246 09:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 3422021 00:25:22.246 09:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 3422021 00:25:22.246 09:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:22.246 09:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:22.246 09:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:22.246 09:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:25:22.246 09:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:25:22.246 09:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:22.246 09:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:22.502 09:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:22.502 09:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:22.502 09:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.502 09:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:22.502 09:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.404 09:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:24.404 09:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:24.404 09:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # 
rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:24.404 09:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:24.404 09:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:24.404 09:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:25:24.404 09:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:24.404 09:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:24.404 09:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:24.404 09:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:24.404 09:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:24.404 09:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:24.404 09:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:25.781 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:25.781 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:25.781 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:25.781 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:25.781 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:25.781 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:25.781 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:25.781 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:25.781 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:25.781 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:25.781 
0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:25.781 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:25.781 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:25.781 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:25.781 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:25.781 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:26.717 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:25:26.975 09:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.FQ4 /tmp/spdk.key-null.kdJ /tmp/spdk.key-sha256.RUL /tmp/spdk.key-sha384.gNN /tmp/spdk.key-sha512.Xdu /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:26.975 09:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:27.910 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:27.910 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:27.910 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:27.910 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:27.910 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:27.910 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:27.910 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:28.169 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:28.169 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:28.169 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:28.169 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:28.169 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:28.169 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:28.169 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:28.169 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:28.169 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 
00:25:28.169 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:28.169 00:25:28.169 real 0m52.236s 00:25:28.169 user 0m49.886s 00:25:28.169 sys 0m6.203s 00:25:28.169 09:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:28.169 09:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.169 ************************************ 00:25:28.169 END TEST nvmf_auth_host 00:25:28.169 ************************************ 00:25:28.169 09:16:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:25:28.169 09:16:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:28.169 09:16:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:28.169 09:16:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:28.169 09:16:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.169 ************************************ 00:25:28.169 START TEST nvmf_digest 00:25:28.169 ************************************ 00:25:28.169 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:28.427 * Looking for test storage... 
00:25:28.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:28.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.427 --rc genhtml_branch_coverage=1 00:25:28.427 --rc genhtml_function_coverage=1 00:25:28.427 --rc genhtml_legend=1 00:25:28.427 --rc geninfo_all_blocks=1 00:25:28.427 --rc geninfo_unexecuted_blocks=1 00:25:28.427 00:25:28.427 ' 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:28.427 --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 00:25:28.427 --rc genhtml_branch_coverage=1 00:25:28.427 --rc genhtml_function_coverage=1 00:25:28.427 --rc genhtml_legend=1 00:25:28.427 --rc geninfo_all_blocks=1 00:25:28.427 --rc geninfo_unexecuted_blocks=1 00:25:28.427 00:25:28.427 ' 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:28.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.427 --rc genhtml_branch_coverage=1 00:25:28.427 --rc genhtml_function_coverage=1 00:25:28.427 --rc genhtml_legend=1 00:25:28.427 --rc geninfo_all_blocks=1 00:25:28.427 --rc geninfo_unexecuted_blocks=1 00:25:28.427 00:25:28.427 ' 00:25:28.427 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:28.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.427 --rc genhtml_branch_coverage=1 00:25:28.427 --rc genhtml_function_coverage=1 00:25:28.427 --rc genhtml_legend=1 00:25:28.427 --rc geninfo_all_blocks=1 00:25:28.427 --rc geninfo_unexecuted_blocks=1 00:25:28.427 00:25:28.427 ' 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # 
export PATH 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:28.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- 
# bperfsock=/var/tmp/bperf.sock 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:25:28.428 09:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:30.953 
09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:30.953 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:30.953 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:30.953 Found net devices under 0000:09:00.0: cvl_0_0 00:25:30.953 
09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:30.953 Found net devices under 0000:09:00.1: cvl_0_1 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:30.953 09:16:07 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest 
-- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:30.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:30.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:25:30.953 00:25:30.953 --- 10.0.0.2 ping statistics --- 00:25:30.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.953 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:30.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:30.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:25:30.953 00:25:30.953 --- 10.0.0.1 ping statistics --- 00:25:30.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.953 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:25:30.953 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:30.954 ************************************ 00:25:30.954 START TEST nvmf_digest_clean 00:25:30.954 ************************************ 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@10 -- # set +x 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3431765 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3431765 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3431765 ']' 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:30.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:30.954 [2024-11-20 09:16:07.684652] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
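The interface plumbing traced above (nvmf/common.sh@250-291) boils down to: move one port of the NIC into a private network namespace, address both ends, open TCP port 4420, and verify with ping. A dry-run sketch of that sequence follows; the interface names, namespace name, and addresses are taken from the log, but the `run` wrapper only prints each command instead of executing it, since the real ones need root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns setup performed by nvmf_tcp_init in the log above.
# run() prints the command instead of executing it; the real sequence needs root.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0        # moved into the namespace; becomes the target side
INITIATOR_IF=cvl_0_1     # stays in the root namespace; initiator side
NS=cvl_0_0_ns_spdk

netns_setup_dryrun() {
    run ip -4 addr flush "$TARGET_IF"
    run ip -4 addr flush "$INITIATOR_IF"
    run ip netns add "$NS"
    run ip link set "$TARGET_IF" netns "$NS"
    run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    run ip link set "$INITIATOR_IF" up
    run ip netns exec "$NS" ip link set "$TARGET_IF" up
    run ip netns exec "$NS" ip link set lo up
    run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2
    run ip netns exec "$NS" ping -c 1 10.0.0.1
}

netns_setup_dryrun
```

The target application is then launched under `ip netns exec cvl_0_0_ns_spdk`, which is why `NVMF_TARGET_NS_CMD` is prepended to `NVMF_APP` later in the log.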
00:25:30.954 [2024-11-20 09:16:07.684744] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:30.954 [2024-11-20 09:16:07.755437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.954 [2024-11-20 09:16:07.811768] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:30.954 [2024-11-20 09:16:07.811820] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:30.954 [2024-11-20 09:16:07.811853] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:30.954 [2024-11-20 09:16:07.811864] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:30.954 [2024-11-20 09:16:07.811873] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:30.954 [2024-11-20 09:16:07.812490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.954 09:16:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:31.212 null0 00:25:31.212 [2024-11-20 09:16:08.036821] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:31.212 [2024-11-20 09:16:08.061031] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:31.212 09:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.212 09:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:25:31.212 09:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:31.212 09:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:31.212 09:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:31.212 09:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:31.212 09:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:31.212 09:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:31.212 09:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3431789 00:25:31.212 09:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:31.212 09:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3431789 /var/tmp/bperf.sock 00:25:31.212 09:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3431789 ']' 00:25:31.212 09:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:31.212 09:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:31.212 09:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:31.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:25:31.212 09:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:31.212 09:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:31.212 [2024-11-20 09:16:08.109457] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:25:31.212 [2024-11-20 09:16:08.109537] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3431789 ] 00:25:31.212 [2024-11-20 09:16:08.175479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.212 [2024-11-20 09:16:08.234471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:31.469 09:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:31.469 09:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:25:31.469 09:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:31.469 09:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:31.469 09:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:32.032 09:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:32.032 09:16:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:32.289 nvme0n1 00:25:32.289 09:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:32.289 09:16:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:32.289 Running I/O for 2 seconds... 00:25:34.646 18407.00 IOPS, 71.90 MiB/s [2024-11-20T08:16:11.675Z] 18475.00 IOPS, 72.17 MiB/s 00:25:34.646 Latency(us) 00:25:34.646 [2024-11-20T08:16:11.675Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:34.646 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:34.646 nvme0n1 : 2.04 18140.42 70.86 0.00 0.00 6914.82 3495.25 45632.47 00:25:34.646 [2024-11-20T08:16:11.675Z] =================================================================================================================== 00:25:34.646 [2024-11-20T08:16:11.675Z] Total : 18140.42 70.86 0.00 0.00 6914.82 3495.25 45632.47 00:25:34.646 { 00:25:34.646 "results": [ 00:25:34.646 { 00:25:34.646 "job": "nvme0n1", 00:25:34.646 "core_mask": "0x2", 00:25:34.646 "workload": "randread", 00:25:34.646 "status": "finished", 00:25:34.646 "queue_depth": 128, 00:25:34.646 "io_size": 4096, 00:25:34.646 "runtime": 2.043944, 00:25:34.646 "iops": 18140.41871988665, 00:25:34.646 "mibps": 70.86101062455722, 00:25:34.646 "io_failed": 0, 00:25:34.646 "io_timeout": 0, 00:25:34.646 "avg_latency_us": 6914.816018923071, 00:25:34.646 "min_latency_us": 3495.2533333333336, 00:25:34.646 "max_latency_us": 45632.474074074074 00:25:34.646 } 00:25:34.646 ], 00:25:34.646 "core_count": 1 00:25:34.646 } 00:25:34.646 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:34.646 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:25:34.646 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:34.646 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:34.646 | select(.opcode=="crc32c") 00:25:34.646 | "\(.module_name) \(.executed)"' 00:25:34.646 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:34.646 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:34.646 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:34.646 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:34.646 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:34.646 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3431789 00:25:34.646 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3431789 ']' 00:25:34.646 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3431789 00:25:34.647 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:25:34.647 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:34.647 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3431789 00:25:34.647 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:34.647 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:34.647 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3431789' 00:25:34.647 killing process with pid 3431789 00:25:34.647 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3431789 00:25:34.647 Received shutdown signal, test time was about 2.000000 seconds 00:25:34.647 00:25:34.647 Latency(us) 00:25:34.647 [2024-11-20T08:16:11.676Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:34.647 [2024-11-20T08:16:11.676Z] =================================================================================================================== 00:25:34.647 [2024-11-20T08:16:11.676Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:34.647 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3431789 00:25:34.904 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:34.904 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:34.904 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:34.904 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:34.904 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:34.905 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:34.905 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:34.905 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3432322 00:25:34.905 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:34.905 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3432322 /var/tmp/bperf.sock 00:25:34.905 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3432322 ']' 00:25:34.905 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:34.905 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:34.905 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:34.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:34.905 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:34.905 09:16:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:34.905 [2024-11-20 09:16:11.931732] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:25:34.905 [2024-11-20 09:16:11.931819] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3432322 ] 00:25:34.905 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:34.905 Zero copy mechanism will not be used. 
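The MiB/s column in the bdevperf result tables is simply IOPS × io_size / 2^20. Recomputing the first run's figure (18140.42 IOPS at 4096-byte reads) with awk reproduces the logged 70.86 MiB/s:

```shell
# MiB/s = IOPS * io_size(bytes) / 1048576; numbers from the 4096-byte randread run above
awk 'BEGIN {
    iops = 18140.42; io_size = 4096
    printf "%.2f MiB/s\n", iops * io_size / 1048576
}'
```

The same check works for the 131072-byte run: 5827.00 IOPS / 8 ≈ 728.37 MiB/s, matching the table.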
00:25:35.163 [2024-11-20 09:16:11.997866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.163 [2024-11-20 09:16:12.052913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:35.163 09:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:35.163 09:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:25:35.163 09:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:35.163 09:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:35.163 09:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:35.729 09:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:35.729 09:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:35.986 nvme0n1 00:25:35.986 09:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:35.986 09:16:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:36.243 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:36.243 Zero copy mechanism will not be used. 00:25:36.243 Running I/O for 2 seconds... 
00:25:38.106 6097.00 IOPS, 762.12 MiB/s [2024-11-20T08:16:15.393Z] 5945.50 IOPS, 743.19 MiB/s 00:25:38.364 Latency(us) 00:25:38.364 [2024-11-20T08:16:15.393Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:38.364 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:38.364 nvme0n1 : 2.04 5827.00 728.37 0.00 0.00 2693.75 743.35 43690.67 00:25:38.364 [2024-11-20T08:16:15.393Z] =================================================================================================================== 00:25:38.364 [2024-11-20T08:16:15.393Z] Total : 5827.00 728.37 0.00 0.00 2693.75 743.35 43690.67 00:25:38.364 { 00:25:38.364 "results": [ 00:25:38.364 { 00:25:38.364 "job": "nvme0n1", 00:25:38.364 "core_mask": "0x2", 00:25:38.364 "workload": "randread", 00:25:38.364 "status": "finished", 00:25:38.364 "queue_depth": 16, 00:25:38.364 "io_size": 131072, 00:25:38.364 "runtime": 2.04342, 00:25:38.364 "iops": 5826.995918607041, 00:25:38.364 "mibps": 728.3744898258801, 00:25:38.364 "io_failed": 0, 00:25:38.364 "io_timeout": 0, 00:25:38.364 "avg_latency_us": 2693.7501876580536, 00:25:38.364 "min_latency_us": 743.3481481481482, 00:25:38.364 "max_latency_us": 43690.666666666664 00:25:38.364 } 00:25:38.364 ], 00:25:38.364 "core_count": 1 00:25:38.364 } 00:25:38.364 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:38.364 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:38.364 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:38.364 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:38.364 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:25:38.364 | select(.opcode=="crc32c") 00:25:38.364 | "\(.module_name) \(.executed)"' 00:25:38.622 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:38.622 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:38.622 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:38.622 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:38.622 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3432322 00:25:38.622 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3432322 ']' 00:25:38.622 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3432322 00:25:38.622 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:25:38.622 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:38.622 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3432322 00:25:38.622 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:38.622 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:38.622 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3432322' 00:25:38.622 killing process with pid 3432322 00:25:38.622 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3432322 00:25:38.622 Received shutdown signal, test time was about 2.000000 seconds 00:25:38.622 
00:25:38.622 Latency(us) 00:25:38.622 [2024-11-20T08:16:15.651Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:38.622 [2024-11-20T08:16:15.651Z] =================================================================================================================== 00:25:38.622 [2024-11-20T08:16:15.651Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:38.622 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3432322 00:25:38.879 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:38.879 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:38.879 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:38.879 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:38.879 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:38.879 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:38.879 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:38.879 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3432732 00:25:38.879 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3432732 /var/tmp/bperf.sock 00:25:38.879 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:38.879 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3432732 ']' 00:25:38.879 09:16:15 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:38.879 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:38.879 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:38.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:38.879 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:38.879 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:38.879 [2024-11-20 09:16:15.739981] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:25:38.879 [2024-11-20 09:16:15.740067] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3432732 ] 00:25:38.879 [2024-11-20 09:16:15.805277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.879 [2024-11-20 09:16:15.859776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:39.136 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:39.136 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:25:39.136 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:39.136 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:39.136 09:16:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:39.393 09:16:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:39.393 09:16:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:39.957 nvme0n1 00:25:39.957 09:16:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:39.957 09:16:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:39.957 Running I/O for 2 seconds... 
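The MiB/s column bdevperf prints is derived directly from the `iops` and `io_size` fields visible in the JSON results earlier in this log; a minimal sanity check, using the numbers from the randread run above (the only assumption is the unit conversion, bytes/s to MiB/s):

```python
# Recompute bdevperf's reported "mibps" from its "iops" and "io_size"
# JSON fields (values copied from the randread results in this log).
iops = 5826.995918607041
io_size = 131072                  # bytes per IO (128 KiB)

mibps = iops * io_size / 2**20    # bytes/s -> MiB/s
print(round(mibps, 2))
```

This reproduces the `"mibps": 728.3744898258801` value in the results block, confirming the throughput column is pure arithmetic over the raw IOPS count.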
00:25:42.264 19193.00 IOPS, 74.97 MiB/s [2024-11-20T08:16:19.293Z] 18936.50 IOPS, 73.97 MiB/s 00:25:42.264 Latency(us) 00:25:42.264 [2024-11-20T08:16:19.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:42.264 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:42.264 nvme0n1 : 2.01 18935.14 73.97 0.00 0.00 6744.63 2572.89 9272.13 00:25:42.264 [2024-11-20T08:16:19.293Z] =================================================================================================================== 00:25:42.264 [2024-11-20T08:16:19.294Z] Total : 18935.14 73.97 0.00 0.00 6744.63 2572.89 9272.13 00:25:42.265 { 00:25:42.265 "results": [ 00:25:42.265 { 00:25:42.265 "job": "nvme0n1", 00:25:42.265 "core_mask": "0x2", 00:25:42.265 "workload": "randwrite", 00:25:42.265 "status": "finished", 00:25:42.265 "queue_depth": 128, 00:25:42.265 "io_size": 4096, 00:25:42.265 "runtime": 2.006481, 00:25:42.265 "iops": 18935.14067663736, 00:25:42.265 "mibps": 73.96539326811468, 00:25:42.265 "io_failed": 0, 00:25:42.265 "io_timeout": 0, 00:25:42.265 "avg_latency_us": 6744.6314304681855, 00:25:42.265 "min_latency_us": 2572.8948148148147, 00:25:42.265 "max_latency_us": 9272.13037037037 00:25:42.265 } 00:25:42.265 ], 00:25:42.265 "core_count": 1 00:25:42.265 } 00:25:42.265 09:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:42.265 09:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:42.265 09:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:42.265 09:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:42.265 | select(.opcode=="crc32c") 00:25:42.265 | "\(.module_name) \(.executed)"' 00:25:42.265 09:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:42.265 09:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:42.265 09:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:42.265 09:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:42.265 09:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:42.265 09:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3432732 00:25:42.265 09:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3432732 ']' 00:25:42.265 09:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3432732 00:25:42.265 09:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:25:42.265 09:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:42.265 09:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3432732 00:25:42.265 09:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:42.265 09:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:42.265 09:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3432732' 00:25:42.265 killing process with pid 3432732 00:25:42.265 09:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3432732 00:25:42.265 Received shutdown signal, test time was about 2.000000 seconds 
00:25:42.265 00:25:42.265 Latency(us) 00:25:42.265 [2024-11-20T08:16:19.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:42.265 [2024-11-20T08:16:19.294Z] =================================================================================================================== 00:25:42.265 [2024-11-20T08:16:19.294Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:42.265 09:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3432732 00:25:42.523 09:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:42.523 09:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:42.523 09:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:42.523 09:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:42.523 09:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:42.523 09:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:42.523 09:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:42.523 09:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3433160 00:25:42.523 09:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3433160 /var/tmp/bperf.sock 00:25:42.523 09:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:42.523 09:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3433160 ']' 00:25:42.523 09:16:19 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:42.523 09:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:42.523 09:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:42.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:42.523 09:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:42.523 09:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:42.781 [2024-11-20 09:16:19.558257] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:25:42.781 [2024-11-20 09:16:19.558386] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3433160 ] 00:25:42.781 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:42.781 Zero copy mechanism will not be used. 
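The "Zero copy mechanism will not be used" notice above reduces to a simple size comparison: the 131072-byte IO size (`-o 131072` on the bdevperf command line) exceeds the 65536-byte zero-copy threshold stated in the message. A sketch of that check, with both values taken from the log message itself (the check's placement inside bdevperf is an assumption):

```python
# Threshold and IO size as stated in the log's zero-copy warning.
ZERO_COPY_THRESHOLD = 65536   # bytes
io_size = 131072              # bytes, from -o 131072

if io_size > ZERO_COPY_THRESHOLD:
    print(f"I/O size of {io_size} is greater than zero copy threshold "
          f"({ZERO_COPY_THRESHOLD}). Zero copy mechanism will not be used.")
```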
00:25:42.781 [2024-11-20 09:16:19.628431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.781 [2024-11-20 09:16:19.686336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:42.781 09:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:42.781 09:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:25:42.781 09:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:42.781 09:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:42.781 09:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:43.346 09:16:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:43.346 09:16:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:43.911 nvme0n1 00:25:43.911 09:16:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:43.911 09:16:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:43.911 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:43.911 Zero copy mechanism will not be used. 00:25:43.911 Running I/O for 2 seconds... 
00:25:45.775 5898.00 IOPS, 737.25 MiB/s [2024-11-20T08:16:22.804Z] 6050.00 IOPS, 756.25 MiB/s 00:25:45.775 Latency(us) 00:25:45.775 [2024-11-20T08:16:22.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:45.775 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:45.775 nvme0n1 : 2.00 6049.16 756.14 0.00 0.00 2638.32 1577.72 5801.15 00:25:45.775 [2024-11-20T08:16:22.804Z] =================================================================================================================== 00:25:45.775 [2024-11-20T08:16:22.804Z] Total : 6049.16 756.14 0.00 0.00 2638.32 1577.72 5801.15 00:25:45.775 { 00:25:45.775 "results": [ 00:25:45.775 { 00:25:45.775 "job": "nvme0n1", 00:25:45.775 "core_mask": "0x2", 00:25:45.775 "workload": "randwrite", 00:25:45.775 "status": "finished", 00:25:45.775 "queue_depth": 16, 00:25:45.775 "io_size": 131072, 00:25:45.775 "runtime": 2.00375, 00:25:45.775 "iops": 6049.157829070493, 00:25:45.775 "mibps": 756.1447286338116, 00:25:45.775 "io_failed": 0, 00:25:45.775 "io_timeout": 0, 00:25:45.775 "avg_latency_us": 2638.320776124693, 00:25:45.775 "min_latency_us": 1577.7185185185185, 00:25:45.775 "max_latency_us": 5801.14962962963 00:25:45.775 } 00:25:45.775 ], 00:25:45.775 "core_count": 1 00:25:45.775 } 00:25:45.775 09:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:45.775 09:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:46.032 09:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:46.032 09:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:46.032 09:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:25:46.032 | select(.opcode=="crc32c") 00:25:46.032 | "\(.module_name) \(.executed)"' 00:25:46.290 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:46.290 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:46.290 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:46.290 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:46.290 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3433160 00:25:46.290 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3433160 ']' 00:25:46.290 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3433160 00:25:46.290 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:25:46.290 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:46.290 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3433160 00:25:46.290 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:46.290 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:46.290 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3433160' 00:25:46.290 killing process with pid 3433160 00:25:46.290 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3433160 00:25:46.290 Received shutdown signal, test time was about 2.000000 seconds 00:25:46.290 
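The accel stats are filtered with the jq expression shown above (select the `crc32c` operation, emit `module_name` and `executed`). An equivalent standalone Python filter, run against a small illustrative `accel_get_stats`-style payload — the field names come from the jq filter, but the sample values are made up for demonstration:

```python
import json

# Illustrative accel_get_stats-style payload; field names mirror the log's
# jq filter: '.operations[] | select(.opcode=="crc32c")'.
sample = json.loads("""
{"operations": [
  {"opcode": "crc32c", "module_name": "software", "executed": 12345},
  {"opcode": "copy",   "module_name": "software", "executed": 999}
]}
""")

for op in sample["operations"]:
    if op["opcode"] == "crc32c":
        # Same output shape as the jq string interpolation
        # "\\(.module_name) \\(.executed)".
        print(op["module_name"], op["executed"])
```

The test script reads the pair back into `acc_module` and `acc_executed` and asserts `acc_executed > 0` against the expected module, which is what the `host/digest.sh@93`-`@96` lines above are doing.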
00:25:46.290 Latency(us) 00:25:46.290 [2024-11-20T08:16:23.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:46.290 [2024-11-20T08:16:23.319Z] =================================================================================================================== 00:25:46.290 [2024-11-20T08:16:23.319Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:46.290 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3433160 00:25:46.546 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3431765 00:25:46.547 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3431765 ']' 00:25:46.547 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3431765 00:25:46.547 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:25:46.547 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:46.547 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3431765 00:25:46.547 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:46.547 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:46.547 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3431765' 00:25:46.547 killing process with pid 3431765 00:25:46.547 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3431765 00:25:46.547 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3431765 00:25:46.804 00:25:46.804 real 
0m15.995s 00:25:46.804 user 0m32.248s 00:25:46.804 sys 0m4.253s 00:25:46.804 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:46.804 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:46.804 ************************************ 00:25:46.804 END TEST nvmf_digest_clean 00:25:46.804 ************************************ 00:25:46.804 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:46.804 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:46.804 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:46.804 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:46.804 ************************************ 00:25:46.804 START TEST nvmf_digest_error 00:25:46.804 ************************************ 00:25:46.804 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # run_digest_error 00:25:46.804 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:46.804 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:46.804 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:46.804 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:46.804 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3433699 00:25:46.804 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:46.804 
09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3433699 00:25:46.804 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3433699 ']' 00:25:46.805 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.805 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:46.805 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:46.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:46.805 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:46.805 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:46.805 [2024-11-20 09:16:23.730173] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:25:46.805 [2024-11-20 09:16:23.730273] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:46.805 [2024-11-20 09:16:23.802232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.064 [2024-11-20 09:16:23.854170] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:47.064 [2024-11-20 09:16:23.854220] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:47.064 [2024-11-20 09:16:23.854247] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:47.064 [2024-11-20 09:16:23.854257] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:47.064 [2024-11-20 09:16:23.854266] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:47.064 [2024-11-20 09:16:23.854854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.064 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:47.064 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:25:47.064 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:47.064 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:47.064 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:47.064 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:47.064 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:47.064 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.064 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:47.064 [2024-11-20 09:16:23.979532] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:47.064 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.064 09:16:23 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:47.064 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:47.064 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.064 09:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:47.064 null0 00:25:47.322 [2024-11-20 09:16:24.095076] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:47.322 [2024-11-20 09:16:24.119274] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:47.322 09:16:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.322 09:16:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:47.322 09:16:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:47.322 09:16:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:47.322 09:16:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:47.322 09:16:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:47.322 09:16:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3433841 00:25:47.322 09:16:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3433841 /var/tmp/bperf.sock 00:25:47.322 09:16:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:47.322 09:16:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3433841 ']' 
00:25:47.322 09:16:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:47.322 09:16:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:47.322 09:16:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:47.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:47.322 09:16:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:47.322 09:16:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:47.322 [2024-11-20 09:16:24.170028] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:25:47.322 [2024-11-20 09:16:24.170108] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3433841 ] 00:25:47.322 [2024-11-20 09:16:24.237065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.322 [2024-11-20 09:16:24.294681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:47.579 09:16:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:47.579 09:16:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:25:47.579 09:16:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:47.579 09:16:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:47.837 09:16:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:47.837 09:16:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.837 09:16:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:47.837 09:16:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.837 09:16:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:47.837 09:16:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:48.094 nvme0n1 00:25:48.094 09:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:48.094 09:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.095 09:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:48.353 09:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.354 09:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:48.354 09:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:48.354 Running I/O for 2 seconds... 00:25:48.354 [2024-11-20 09:16:25.271393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.354 [2024-11-20 09:16:25.271444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.354 [2024-11-20 09:16:25.271465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.354 [2024-11-20 09:16:25.283371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.354 [2024-11-20 09:16:25.283402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:39 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.354 [2024-11-20 09:16:25.283434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.354 [2024-11-20 09:16:25.297360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.354 [2024-11-20 09:16:25.297390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.354 [2024-11-20 09:16:25.297421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.354 [2024-11-20 09:16:25.312761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.354 [2024-11-20 09:16:25.312791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:462 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.354 [2024-11-20 09:16:25.312830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.354 [2024-11-20 09:16:25.326734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.354 [2024-11-20 09:16:25.326762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.354 [2024-11-20 09:16:25.326792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.354 [2024-11-20 09:16:25.339612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.354 [2024-11-20 09:16:25.339655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.354 [2024-11-20 09:16:25.339672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.354 [2024-11-20 09:16:25.350535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.354 [2024-11-20 09:16:25.350563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.354 [2024-11-20 09:16:25.350593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.354 [2024-11-20 09:16:25.364173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.354 [2024-11-20 09:16:25.364200] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.354 [2024-11-20 09:16:25.364230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.354 [2024-11-20 09:16:25.378677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.354 [2024-11-20 09:16:25.378707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.354 [2024-11-20 09:16:25.378724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.613 [2024-11-20 09:16:25.393652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.613 [2024-11-20 09:16:25.393682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.613 [2024-11-20 09:16:25.393700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.613 [2024-11-20 09:16:25.406851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.613 [2024-11-20 09:16:25.406882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.613 [2024-11-20 09:16:25.406899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.613 [2024-11-20 09:16:25.417117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.613 [2024-11-20 
09:16:25.417148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.613 [2024-11-20 09:16:25.417165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.613 [2024-11-20 09:16:25.430931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.613 [2024-11-20 09:16:25.430964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.613 [2024-11-20 09:16:25.430981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.613 [2024-11-20 09:16:25.445551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.613 [2024-11-20 09:16:25.445595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.613 [2024-11-20 09:16:25.445611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.613 [2024-11-20 09:16:25.461645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.613 [2024-11-20 09:16:25.461676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.613 [2024-11-20 09:16:25.461692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.613 [2024-11-20 09:16:25.475269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x5d6720) 00:25:48.613 [2024-11-20 09:16:25.475300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.613 [2024-11-20 09:16:25.475325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.613 [2024-11-20 09:16:25.486556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.613 [2024-11-20 09:16:25.486600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.613 [2024-11-20 09:16:25.486616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.613 [2024-11-20 09:16:25.502510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.613 [2024-11-20 09:16:25.502541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.613 [2024-11-20 09:16:25.502574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.613 [2024-11-20 09:16:25.516004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.613 [2024-11-20 09:16:25.516035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.613 [2024-11-20 09:16:25.516052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.613 [2024-11-20 09:16:25.531926] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.613 [2024-11-20 09:16:25.531957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.613 [2024-11-20 09:16:25.531974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.613 [2024-11-20 09:16:25.543331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.613 [2024-11-20 09:16:25.543362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.613 [2024-11-20 09:16:25.543379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.613 [2024-11-20 09:16:25.559372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.613 [2024-11-20 09:16:25.559400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.613 [2024-11-20 09:16:25.559431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.613 [2024-11-20 09:16:25.573606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.613 [2024-11-20 09:16:25.573638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.613 [2024-11-20 09:16:25.573655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:25:48.613 [2024-11-20 09:16:25.585200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.613 [2024-11-20 09:16:25.585227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.613 [2024-11-20 09:16:25.585256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.613 [2024-11-20 09:16:25.599232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.613 [2024-11-20 09:16:25.599259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.613 [2024-11-20 09:16:25.599289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.613 [2024-11-20 09:16:25.615386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.613 [2024-11-20 09:16:25.615413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.613 [2024-11-20 09:16:25.615443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.613 [2024-11-20 09:16:25.631413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.613 [2024-11-20 09:16:25.631440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.613 [2024-11-20 09:16:25.631470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.871 [2024-11-20 09:16:25.644244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.871 [2024-11-20 09:16:25.644271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.871 [2024-11-20 09:16:25.644301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.871 [2024-11-20 09:16:25.657059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.871 [2024-11-20 09:16:25.657086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.871 [2024-11-20 09:16:25.657115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.871 [2024-11-20 09:16:25.670886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.872 [2024-11-20 09:16:25.670913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.872 [2024-11-20 09:16:25.670948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.872 [2024-11-20 09:16:25.685651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.872 [2024-11-20 09:16:25.685681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.872 [2024-11-20 09:16:25.685711] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.872 [2024-11-20 09:16:25.697454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.872 [2024-11-20 09:16:25.697485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.872 [2024-11-20 09:16:25.697502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.872 [2024-11-20 09:16:25.713299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.872 [2024-11-20 09:16:25.713334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.872 [2024-11-20 09:16:25.713364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.872 [2024-11-20 09:16:25.726956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.872 [2024-11-20 09:16:25.726987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.872 [2024-11-20 09:16:25.727003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.872 [2024-11-20 09:16:25.738214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.872 [2024-11-20 09:16:25.738240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:48.872 [2024-11-20 09:16:25.738270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.872 [2024-11-20 09:16:25.752634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.872 [2024-11-20 09:16:25.752680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.872 [2024-11-20 09:16:25.752697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.872 [2024-11-20 09:16:25.767182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.872 [2024-11-20 09:16:25.767209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.872 [2024-11-20 09:16:25.767240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.872 [2024-11-20 09:16:25.782009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.872 [2024-11-20 09:16:25.782038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.872 [2024-11-20 09:16:25.782070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.872 [2024-11-20 09:16:25.794243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.872 [2024-11-20 09:16:25.794271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 
nsid:1 lba:9920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.872 [2024-11-20 09:16:25.794308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.872 [2024-11-20 09:16:25.808644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.872 [2024-11-20 09:16:25.808671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.872 [2024-11-20 09:16:25.808700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.872 [2024-11-20 09:16:25.824573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.872 [2024-11-20 09:16:25.824616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.872 [2024-11-20 09:16:25.824632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.872 [2024-11-20 09:16:25.837275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.872 [2024-11-20 09:16:25.837323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.872 [2024-11-20 09:16:25.837339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.872 [2024-11-20 09:16:25.853195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.872 [2024-11-20 09:16:25.853222] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.872 [2024-11-20 09:16:25.853252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.872 [2024-11-20 09:16:25.869409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.872 [2024-11-20 09:16:25.869439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.872 [2024-11-20 09:16:25.869456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.872 [2024-11-20 09:16:25.883759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:48.872 [2024-11-20 09:16:25.883788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.872 [2024-11-20 09:16:25.883821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.131 [2024-11-20 09:16:25.899611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.131 [2024-11-20 09:16:25.899642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.131 [2024-11-20 09:16:25.899659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.131 [2024-11-20 09:16:25.910894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 
00:25:49.131 [2024-11-20 09:16:25.910920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.131 [2024-11-20 09:16:25.910957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.131 [2024-11-20 09:16:25.926597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.131 [2024-11-20 09:16:25.926643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.131 [2024-11-20 09:16:25.926659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.131 [2024-11-20 09:16:25.941033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.131 [2024-11-20 09:16:25.941072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.131 [2024-11-20 09:16:25.941103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.131 [2024-11-20 09:16:25.957565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.131 [2024-11-20 09:16:25.957609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.131 [2024-11-20 09:16:25.957625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.131 [2024-11-20 09:16:25.970768] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.131 [2024-11-20 09:16:25.970813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.131 [2024-11-20 09:16:25.970831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.131 [2024-11-20 09:16:25.985470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.131 [2024-11-20 09:16:25.985500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.131 [2024-11-20 09:16:25.985517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.131 [2024-11-20 09:16:25.997296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.131 [2024-11-20 09:16:25.997331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.131 [2024-11-20 09:16:25.997362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.131 [2024-11-20 09:16:26.010848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.131 [2024-11-20 09:16:26.010893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.131 [2024-11-20 09:16:26.010910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:25:49.131 [2024-11-20 09:16:26.024963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.131 [2024-11-20 09:16:26.025009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.131 [2024-11-20 09:16:26.025025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.131 [2024-11-20 09:16:26.035388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.131 [2024-11-20 09:16:26.035425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.131 [2024-11-20 09:16:26.035458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.131 [2024-11-20 09:16:26.050244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.131 [2024-11-20 09:16:26.050273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.131 [2024-11-20 09:16:26.050311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.131 [2024-11-20 09:16:26.065847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.131 [2024-11-20 09:16:26.065878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.131 [2024-11-20 09:16:26.065894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.131 [2024-11-20 09:16:26.082514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.131 [2024-11-20 09:16:26.082544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.131 [2024-11-20 09:16:26.082576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.131 [2024-11-20 09:16:26.098658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.131 [2024-11-20 09:16:26.098686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.131 [2024-11-20 09:16:26.098717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.131 [2024-11-20 09:16:26.113656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.131 [2024-11-20 09:16:26.113687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.131 [2024-11-20 09:16:26.113704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.131 [2024-11-20 09:16:26.126255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.131 [2024-11-20 09:16:26.126285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.131 [2024-11-20 09:16:26.126308] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.131 [2024-11-20 09:16:26.140395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.131 [2024-11-20 09:16:26.140439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.131 [2024-11-20 09:16:26.140456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.131 [2024-11-20 09:16:26.152068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.131 [2024-11-20 09:16:26.152096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:17998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.131 [2024-11-20 09:16:26.152126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.390 [2024-11-20 09:16:26.163660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.390 [2024-11-20 09:16:26.163703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.390 [2024-11-20 09:16:26.163718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.390 [2024-11-20 09:16:26.178373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.390 [2024-11-20 09:16:26.178401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:49.390 [2024-11-20 09:16:26.178432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.390 [2024-11-20 09:16:26.194056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.390 [2024-11-20 09:16:26.194087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.390 [2024-11-20 09:16:26.194104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.390 [2024-11-20 09:16:26.210067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.390 [2024-11-20 09:16:26.210095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.390 [2024-11-20 09:16:26.210125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.390 [2024-11-20 09:16:26.225694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.390 [2024-11-20 09:16:26.225724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.390 [2024-11-20 09:16:26.225741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.390 [2024-11-20 09:16:26.236691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.390 [2024-11-20 09:16:26.236719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 
lba:10145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.390 [2024-11-20 09:16:26.236750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.390 17980.00 IOPS, 70.23 MiB/s [2024-11-20T08:16:26.419Z] [2024-11-20 09:16:26.252431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.390 [2024-11-20 09:16:26.252487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.390 [2024-11-20 09:16:26.252505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.390 [2024-11-20 09:16:26.268255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.390 [2024-11-20 09:16:26.268285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.390 [2024-11-20 09:16:26.268325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.390 [2024-11-20 09:16:26.281972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.390 [2024-11-20 09:16:26.282002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.390 [2024-11-20 09:16:26.282026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.390 [2024-11-20 09:16:26.293869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.390 
[2024-11-20 09:16:26.293897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:25331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.390 [2024-11-20 09:16:26.293928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.390 [2024-11-20 09:16:26.307903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.390 [2024-11-20 09:16:26.307948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.390 [2024-11-20 09:16:26.307964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.390 [2024-11-20 09:16:26.323005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.390 [2024-11-20 09:16:26.323035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.390 [2024-11-20 09:16:26.323052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.390 [2024-11-20 09:16:26.336729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.390 [2024-11-20 09:16:26.336761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.390 [2024-11-20 09:16:26.336777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.390 [2024-11-20 09:16:26.348742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x5d6720) 00:25:49.390 [2024-11-20 09:16:26.348775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.390 [2024-11-20 09:16:26.348792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.390 [2024-11-20 09:16:26.361250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.390 [2024-11-20 09:16:26.361293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.390 [2024-11-20 09:16:26.361319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.390 [2024-11-20 09:16:26.376520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.390 [2024-11-20 09:16:26.376549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.390 [2024-11-20 09:16:26.376589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.390 [2024-11-20 09:16:26.390533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.390 [2024-11-20 09:16:26.390564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.390 [2024-11-20 09:16:26.390582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.390 [2024-11-20 09:16:26.404874] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.390 [2024-11-20 09:16:26.404905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.390 [2024-11-20 09:16:26.404922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.391 [2024-11-20 09:16:26.416026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.391 [2024-11-20 09:16:26.416056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.391 [2024-11-20 09:16:26.416086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.649 [2024-11-20 09:16:26.429085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.649 [2024-11-20 09:16:26.429129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.649 [2024-11-20 09:16:26.429147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.649 [2024-11-20 09:16:26.445931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.649 [2024-11-20 09:16:26.445963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.649 [2024-11-20 09:16:26.445980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:25:49.649 [2024-11-20 09:16:26.462102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.649 [2024-11-20 09:16:26.462131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.649 [2024-11-20 09:16:26.462161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.649 [2024-11-20 09:16:26.478805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.649 [2024-11-20 09:16:26.478836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.649 [2024-11-20 09:16:26.478853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.649 [2024-11-20 09:16:26.489106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.649 [2024-11-20 09:16:26.489135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.649 [2024-11-20 09:16:26.489167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.649 [2024-11-20 09:16:26.504657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.649 [2024-11-20 09:16:26.504688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.649 [2024-11-20 09:16:26.504704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.649 [2024-11-20 09:16:26.520805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.649 [2024-11-20 09:16:26.520834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.649 [2024-11-20 09:16:26.520871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.649 [2024-11-20 09:16:26.535240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.649 [2024-11-20 09:16:26.535271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.649 [2024-11-20 09:16:26.535288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.649 [2024-11-20 09:16:26.546988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.649 [2024-11-20 09:16:26.547033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.649 [2024-11-20 09:16:26.547050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.649 [2024-11-20 09:16:26.559255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.649 [2024-11-20 09:16:26.559300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.649 [2024-11-20 09:16:26.559328] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.649 [2024-11-20 09:16:26.573169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.649 [2024-11-20 09:16:26.573199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.649 [2024-11-20 09:16:26.573215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.649 [2024-11-20 09:16:26.585671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.649 [2024-11-20 09:16:26.585717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.649 [2024-11-20 09:16:26.585734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.649 [2024-11-20 09:16:26.599879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.649 [2024-11-20 09:16:26.599910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.649 [2024-11-20 09:16:26.599927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.649 [2024-11-20 09:16:26.611537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.649 [2024-11-20 09:16:26.611566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20363 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:49.649 [2024-11-20 09:16:26.611598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.649 [2024-11-20 09:16:26.627013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.649 [2024-11-20 09:16:26.627045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.649 [2024-11-20 09:16:26.627062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.649 [2024-11-20 09:16:26.640921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.649 [2024-11-20 09:16:26.640958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.649 [2024-11-20 09:16:26.640996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.649 [2024-11-20 09:16:26.656678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.649 [2024-11-20 09:16:26.656709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.649 [2024-11-20 09:16:26.656726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.649 [2024-11-20 09:16:26.672277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.649 [2024-11-20 09:16:26.672314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:28 nsid:1 lba:13377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.649 [2024-11-20 09:16:26.672333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.908 [2024-11-20 09:16:26.685670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.908 [2024-11-20 09:16:26.685700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.908 [2024-11-20 09:16:26.685717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.908 [2024-11-20 09:16:26.697021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.908 [2024-11-20 09:16:26.697048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.908 [2024-11-20 09:16:26.697078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.908 [2024-11-20 09:16:26.710946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.908 [2024-11-20 09:16:26.710978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.908 [2024-11-20 09:16:26.711006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.908 [2024-11-20 09:16:26.726528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.908 [2024-11-20 09:16:26.726560] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.908 [2024-11-20 09:16:26.726577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.908 [2024-11-20 09:16:26.738029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.908 [2024-11-20 09:16:26.738057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.908 [2024-11-20 09:16:26.738088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.908 [2024-11-20 09:16:26.753326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.908 [2024-11-20 09:16:26.753357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.908 [2024-11-20 09:16:26.753373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.908 [2024-11-20 09:16:26.765183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.908 [2024-11-20 09:16:26.765212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.908 [2024-11-20 09:16:26.765244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.908 [2024-11-20 09:16:26.777571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 
00:25:49.908 [2024-11-20 09:16:26.777601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.908 [2024-11-20 09:16:26.777618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.908 [2024-11-20 09:16:26.791193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.908 [2024-11-20 09:16:26.791221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.908 [2024-11-20 09:16:26.791251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.908 [2024-11-20 09:16:26.804424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.909 [2024-11-20 09:16:26.804454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.909 [2024-11-20 09:16:26.804472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.909 [2024-11-20 09:16:26.815603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.909 [2024-11-20 09:16:26.815631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.909 [2024-11-20 09:16:26.815647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.909 [2024-11-20 09:16:26.832081] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.909 [2024-11-20 09:16:26.832112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.909 [2024-11-20 09:16:26.832129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.909 [2024-11-20 09:16:26.847729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.909 [2024-11-20 09:16:26.847757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.909 [2024-11-20 09:16:26.847788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.909 [2024-11-20 09:16:26.864422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.909 [2024-11-20 09:16:26.864452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.909 [2024-11-20 09:16:26.864484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.909 [2024-11-20 09:16:26.879822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.909 [2024-11-20 09:16:26.879849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.909 [2024-11-20 09:16:26.879887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:25:49.909 [2024-11-20 09:16:26.895221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.909 [2024-11-20 09:16:26.895252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.909 [2024-11-20 09:16:26.895269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.909 [2024-11-20 09:16:26.910367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.909 [2024-11-20 09:16:26.910396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.909 [2024-11-20 09:16:26.910428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.909 [2024-11-20 09:16:26.921477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:49.909 [2024-11-20 09:16:26.921505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.909 [2024-11-20 09:16:26.921537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:50.167 [2024-11-20 09:16:26.935668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:50.167 [2024-11-20 09:16:26.935698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.167 [2024-11-20 09:16:26.935715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:50.167 [2024-11-20 09:16:26.947659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:50.167 [2024-11-20 09:16:26.947687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.167 [2024-11-20 09:16:26.947718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:50.167 [2024-11-20 09:16:26.962150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:50.167 [2024-11-20 09:16:26.962181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.167 [2024-11-20 09:16:26.962199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:50.167 [2024-11-20 09:16:26.977712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:50.167 [2024-11-20 09:16:26.977740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.167 [2024-11-20 09:16:26.977769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:50.167 [2024-11-20 09:16:26.991756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:50.167 [2024-11-20 09:16:26.991785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.167 [2024-11-20 09:16:26.991802] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:50.167 [2024-11-20 09:16:27.006292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:50.167 [2024-11-20 09:16:27.006352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.167 [2024-11-20 09:16:27.006371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:50.167 [2024-11-20 09:16:27.017691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:50.167 [2024-11-20 09:16:27.017734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.167 [2024-11-20 09:16:27.017749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:50.167 [2024-11-20 09:16:27.032000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:50.167 [2024-11-20 09:16:27.032031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.167 [2024-11-20 09:16:27.032048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:50.167 [2024-11-20 09:16:27.045822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:50.167 [2024-11-20 09:16:27.045849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11132 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:50.167 [2024-11-20 09:16:27.045879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:50.167 [2024-11-20 09:16:27.061661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:50.167 [2024-11-20 09:16:27.061688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.167 [2024-11-20 09:16:27.061718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:50.167 [2024-11-20 09:16:27.077932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:50.167 [2024-11-20 09:16:27.077976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.167 [2024-11-20 09:16:27.077992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:50.167 [2024-11-20 09:16:27.093416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:50.167 [2024-11-20 09:16:27.093444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.167 [2024-11-20 09:16:27.093475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:50.167 [2024-11-20 09:16:27.108136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:50.167 [2024-11-20 09:16:27.108181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:75 nsid:1 lba:3179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.167 [2024-11-20 09:16:27.108198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:50.167 [2024-11-20 09:16:27.119510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:50.167 [2024-11-20 09:16:27.119538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.167 [2024-11-20 09:16:27.119569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:50.167 [2024-11-20 09:16:27.133471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:50.167 [2024-11-20 09:16:27.133501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.167 [2024-11-20 09:16:27.133532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:50.167 [2024-11-20 09:16:27.150014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:50.167 [2024-11-20 09:16:27.150043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.167 [2024-11-20 09:16:27.150074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:50.167 [2024-11-20 09:16:27.164867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:50.167 [2024-11-20 09:16:27.164896] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.167 [2024-11-20 09:16:27.164927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:50.167 [2024-11-20 09:16:27.177298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:50.167 [2024-11-20 09:16:27.177335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.167 [2024-11-20 09:16:27.177366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:50.167 [2024-11-20 09:16:27.191966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:50.167 [2024-11-20 09:16:27.191997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.167 [2024-11-20 09:16:27.192013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:50.425 [2024-11-20 09:16:27.205814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:50.425 [2024-11-20 09:16:27.205855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.425 [2024-11-20 09:16:27.205871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:50.426 [2024-11-20 09:16:27.218204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x5d6720) 00:25:50.426 [2024-11-20 09:16:27.218235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.426 [2024-11-20 09:16:27.218252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:50.426 [2024-11-20 09:16:27.233965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:50.426 [2024-11-20 09:16:27.233996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.426 [2024-11-20 09:16:27.234012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:50.426 [2024-11-20 09:16:27.245442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5d6720) 00:25:50.426 [2024-11-20 09:16:27.245469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.426 [2024-11-20 09:16:27.245507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:50.426 18108.00 IOPS, 70.73 MiB/s 00:25:50.426 Latency(us) 00:25:50.426 [2024-11-20T08:16:27.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:50.426 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:50.426 nvme0n1 : 2.04 17781.74 69.46 0.00 0.00 7053.09 3470.98 54370.61 00:25:50.426 [2024-11-20T08:16:27.455Z] =================================================================================================================== 00:25:50.426 [2024-11-20T08:16:27.455Z] Total : 17781.74 69.46 0.00 0.00 7053.09 3470.98 
54370.61 00:25:50.426 { 00:25:50.426 "results": [ 00:25:50.426 { 00:25:50.426 "job": "nvme0n1", 00:25:50.426 "core_mask": "0x2", 00:25:50.426 "workload": "randread", 00:25:50.426 "status": "finished", 00:25:50.426 "queue_depth": 128, 00:25:50.426 "io_size": 4096, 00:25:50.426 "runtime": 2.043894, 00:25:50.426 "iops": 17781.744063048278, 00:25:50.426 "mibps": 69.45993774628234, 00:25:50.426 "io_failed": 0, 00:25:50.426 "io_timeout": 0, 00:25:50.426 "avg_latency_us": 7053.092542209831, 00:25:50.426 "min_latency_us": 3470.9807407407407, 00:25:50.426 "max_latency_us": 54370.607407407406 00:25:50.426 } 00:25:50.426 ], 00:25:50.426 "core_count": 1 00:25:50.426 } 00:25:50.426 09:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:50.426 09:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:50.426 09:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:50.426 09:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:50.426 | .driver_specific 00:25:50.426 | .nvme_error 00:25:50.426 | .status_code 00:25:50.426 | .command_transient_transport_error' 00:25:50.684 09:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 142 > 0 )) 00:25:50.684 09:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3433841 00:25:50.684 09:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3433841 ']' 00:25:50.684 09:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3433841 00:25:50.684 09:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:25:50.684 
09:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:50.684 09:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3433841 00:25:50.684 09:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:50.684 09:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:50.684 09:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3433841' 00:25:50.684 killing process with pid 3433841 00:25:50.684 09:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3433841 00:25:50.684 Received shutdown signal, test time was about 2.000000 seconds 00:25:50.684 00:25:50.684 Latency(us) 00:25:50.684 [2024-11-20T08:16:27.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:50.684 [2024-11-20T08:16:27.713Z] =================================================================================================================== 00:25:50.684 [2024-11-20T08:16:27.713Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:50.684 09:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3433841 00:25:50.942 09:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:25:50.942 09:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:50.942 09:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:50.942 09:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:50.942 09:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 
00:25:50.942 09:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3434249 00:25:50.942 09:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:50.942 09:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3434249 /var/tmp/bperf.sock 00:25:50.942 09:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3434249 ']' 00:25:50.942 09:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:50.942 09:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:50.942 09:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:50.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:50.942 09:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:50.942 09:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:50.942 [2024-11-20 09:16:27.913076] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:25:50.942 [2024-11-20 09:16:27.913161] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3434249 ] 00:25:50.942 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:50.942 Zero copy mechanism will not be used. 
00:25:51.200 [2024-11-20 09:16:27.980193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.200 [2024-11-20 09:16:28.036683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:51.200 09:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:51.200 09:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:25:51.200 09:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:51.200 09:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:51.458 09:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:51.458 09:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.458 09:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:51.458 09:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.458 09:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:51.458 09:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:52.025 nvme0n1 00:25:52.025 09:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:52.025 09:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.025 09:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:52.025 09:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.025 09:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:52.025 09:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:52.025 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:52.025 Zero copy mechanism will not be used. 00:25:52.025 Running I/O for 2 seconds... 00:25:52.025 [2024-11-20 09:16:28.876541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.025 [2024-11-20 09:16:28.876616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-11-20 09:16:28.876652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.025 [2024-11-20 09:16:28.882543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.025 [2024-11-20 09:16:28.882577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-11-20 09:16:28.882595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.025 
[2024-11-20 09:16:28.888001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.025 [2024-11-20 09:16:28.888032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-11-20 09:16:28.888049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.025 [2024-11-20 09:16:28.894162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.025 [2024-11-20 09:16:28.894194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-11-20 09:16:28.894212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.025 [2024-11-20 09:16:28.901600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.025 [2024-11-20 09:16:28.901645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-11-20 09:16:28.901662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.025 [2024-11-20 09:16:28.908666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.025 [2024-11-20 09:16:28.908697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-11-20 09:16:28.908715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.025 [2024-11-20 09:16:28.912426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.025 [2024-11-20 09:16:28.912457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-11-20 09:16:28.912475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.025 [2024-11-20 09:16:28.919388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.025 [2024-11-20 09:16:28.919419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-11-20 09:16:28.919453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.025 [2024-11-20 09:16:28.924679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.025 [2024-11-20 09:16:28.924708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-11-20 09:16:28.924740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.025 [2024-11-20 09:16:28.930121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.025 [2024-11-20 09:16:28.930150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-11-20 09:16:28.930183] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.025 [2024-11-20 09:16:28.935532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.025 [2024-11-20 09:16:28.935578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-11-20 09:16:28.935594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.025 [2024-11-20 09:16:28.941019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.025 [2024-11-20 09:16:28.941049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-11-20 09:16:28.941067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.025 [2024-11-20 09:16:28.946529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.025 [2024-11-20 09:16:28.946560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-11-20 09:16:28.946593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.025 [2024-11-20 09:16:28.951398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.025 [2024-11-20 09:16:28.951429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:52.025 [2024-11-20 09:16:28.951446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.025 [2024-11-20 09:16:28.954921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.025 [2024-11-20 09:16:28.954950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-11-20 09:16:28.954982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.025 [2024-11-20 09:16:28.960029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.025 [2024-11-20 09:16:28.960057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-11-20 09:16:28.960094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.025 [2024-11-20 09:16:28.965520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.025 [2024-11-20 09:16:28.965551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-11-20 09:16:28.965568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.025 [2024-11-20 09:16:28.970673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.025 [2024-11-20 09:16:28.970703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-11-20 09:16:28.970720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.025 [2024-11-20 09:16:28.975957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.025 [2024-11-20 09:16:28.975987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-11-20 09:16:28.976019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.026 [2024-11-20 09:16:28.981413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.026 [2024-11-20 09:16:28.981444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-11-20 09:16:28.981461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.026 [2024-11-20 09:16:28.986666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.026 [2024-11-20 09:16:28.986695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-11-20 09:16:28.986728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.026 [2024-11-20 09:16:28.991838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.026 [2024-11-20 09:16:28.991866] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-11-20 09:16:28.991898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.026 [2024-11-20 09:16:28.997015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.026 [2024-11-20 09:16:28.997044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-11-20 09:16:28.997075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.026 [2024-11-20 09:16:29.002123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.026 [2024-11-20 09:16:29.002151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-11-20 09:16:29.002167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.026 [2024-11-20 09:16:29.007249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.026 [2024-11-20 09:16:29.007298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-11-20 09:16:29.007324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.026 [2024-11-20 09:16:29.012343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x18a6dc0) 00:25:52.026 [2024-11-20 09:16:29.012388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-11-20 09:16:29.012405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.026 [2024-11-20 09:16:29.017372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.026 [2024-11-20 09:16:29.017401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-11-20 09:16:29.017433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.026 [2024-11-20 09:16:29.022422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.026 [2024-11-20 09:16:29.022451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-11-20 09:16:29.022467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.026 [2024-11-20 09:16:29.027446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.026 [2024-11-20 09:16:29.027475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-11-20 09:16:29.027492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.026 [2024-11-20 09:16:29.032379] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.026 [2024-11-20 09:16:29.032408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-11-20 09:16:29.032425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.026 [2024-11-20 09:16:29.037346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.026 [2024-11-20 09:16:29.037391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-11-20 09:16:29.037407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.026 [2024-11-20 09:16:29.042436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.026 [2024-11-20 09:16:29.042465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-11-20 09:16:29.042482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.026 [2024-11-20 09:16:29.047541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.026 [2024-11-20 09:16:29.047572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-11-20 09:16:29.047589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:25:52.293 [2024-11-20 09:16:29.052587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.293 [2024-11-20 09:16:29.052617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.293 [2024-11-20 09:16:29.052633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.293 [2024-11-20 09:16:29.057660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.293 [2024-11-20 09:16:29.057704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.293 [2024-11-20 09:16:29.057720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.293 [2024-11-20 09:16:29.062760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.293 [2024-11-20 09:16:29.062804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.293 [2024-11-20 09:16:29.062821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.293 [2024-11-20 09:16:29.067929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.293 [2024-11-20 09:16:29.067959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.293 [2024-11-20 09:16:29.067990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.293 [2024-11-20 09:16:29.072953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.293 [2024-11-20 09:16:29.072982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.293 [2024-11-20 09:16:29.073000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.293 [2024-11-20 09:16:29.078004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.293 [2024-11-20 09:16:29.078034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.294 [2024-11-20 09:16:29.078051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.294 [2024-11-20 09:16:29.082960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.294 [2024-11-20 09:16:29.082989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.294 [2024-11-20 09:16:29.083006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.294 [2024-11-20 09:16:29.087939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.294 [2024-11-20 09:16:29.087968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.294 [2024-11-20 
09:16:29.088000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.294 [2024-11-20 09:16:29.093031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.294 [2024-11-20 09:16:29.093060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.294 [2024-11-20 09:16:29.093083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.294 [2024-11-20 09:16:29.098239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.294 [2024-11-20 09:16:29.098268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.294 [2024-11-20 09:16:29.098286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.294 [2024-11-20 09:16:29.103282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.294 [2024-11-20 09:16:29.103317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.294 [2024-11-20 09:16:29.103350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.294 [2024-11-20 09:16:29.108361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.294 [2024-11-20 09:16:29.108391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25088 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.294 [2024-11-20 09:16:29.108407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.294 [2024-11-20 09:16:29.113446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.294 [2024-11-20 09:16:29.113475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.295 [2024-11-20 09:16:29.113491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.295 [2024-11-20 09:16:29.118521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.295 [2024-11-20 09:16:29.118551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.295 [2024-11-20 09:16:29.118568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.295 [2024-11-20 09:16:29.123702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.295 [2024-11-20 09:16:29.123729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.295 [2024-11-20 09:16:29.123760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.295 [2024-11-20 09:16:29.128888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.295 [2024-11-20 09:16:29.128918] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.295 [2024-11-20 09:16:29.128934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.295 [2024-11-20 09:16:29.133534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.295 [2024-11-20 09:16:29.133563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.295 [2024-11-20 09:16:29.133581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.295 [2024-11-20 09:16:29.136939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.295 [2024-11-20 09:16:29.136969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.295 [2024-11-20 09:16:29.136986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.295 [2024-11-20 09:16:29.141521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.295 [2024-11-20 09:16:29.141551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.295 [2024-11-20 09:16:29.141567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.295 [2024-11-20 09:16:29.146466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x18a6dc0) 00:25:52.296 [2024-11-20 09:16:29.146494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.296 [2024-11-20 09:16:29.146526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.296 [2024-11-20 09:16:29.151705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.296 [2024-11-20 09:16:29.151732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.296 [2024-11-20 09:16:29.151763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.296 [2024-11-20 09:16:29.156830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.296 [2024-11-20 09:16:29.156858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.296 [2024-11-20 09:16:29.156889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.296 [2024-11-20 09:16:29.161927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.296 [2024-11-20 09:16:29.161955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.296 [2024-11-20 09:16:29.161971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.301 [2024-11-20 09:16:29.166975] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.302 [2024-11-20 09:16:29.167003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.302 [2024-11-20 09:16:29.167034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.302 [2024-11-20 09:16:29.172386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.302 [2024-11-20 09:16:29.172415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.302 [2024-11-20 09:16:29.172432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.302 [2024-11-20 09:16:29.177546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.302 [2024-11-20 09:16:29.177575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.302 [2024-11-20 09:16:29.177614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.302 [2024-11-20 09:16:29.182515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.302 [2024-11-20 09:16:29.182542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.302 [2024-11-20 09:16:29.182574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:25:52.302 [2024-11-20 09:16:29.187567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.305 [2024-11-20 09:16:29.187611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.305 [2024-11-20 09:16:29.187627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.305 [2024-11-20 09:16:29.192671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.305 [2024-11-20 09:16:29.192699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.306 [2024-11-20 09:16:29.192730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.306 [2024-11-20 09:16:29.197844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.306 [2024-11-20 09:16:29.197873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.306 [2024-11-20 09:16:29.197889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.306 [2024-11-20 09:16:29.202880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.306 [2024-11-20 09:16:29.202907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.306 [2024-11-20 09:16:29.202938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.306 [2024-11-20 09:16:29.207875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.306 [2024-11-20 09:16:29.207902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.306 [2024-11-20 09:16:29.207933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.306 [2024-11-20 09:16:29.212873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.306 [2024-11-20 09:16:29.212901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.306 [2024-11-20 09:16:29.212934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.306 [2024-11-20 09:16:29.217910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.306 [2024-11-20 09:16:29.217938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.306 [2024-11-20 09:16:29.217953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.306 [2024-11-20 09:16:29.222989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.306 [2024-11-20 09:16:29.223022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.306 [2024-11-20 09:16:29.223053] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.306 [2024-11-20 09:16:29.227982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.306 [2024-11-20 09:16:29.228010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.306 [2024-11-20 09:16:29.228041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.306 [2024-11-20 09:16:29.232947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.306 [2024-11-20 09:16:29.232989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.306 [2024-11-20 09:16:29.233004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.306 [2024-11-20 09:16:29.238013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.306 [2024-11-20 09:16:29.238041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.306 [2024-11-20 09:16:29.238071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.306 [2024-11-20 09:16:29.243157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.306 [2024-11-20 09:16:29.243186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:52.306 [2024-11-20 09:16:29.243220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.306 [2024-11-20 09:16:29.248180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.306 [2024-11-20 09:16:29.248209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.306 [2024-11-20 09:16:29.248241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.306 [2024-11-20 09:16:29.253181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.307 [2024-11-20 09:16:29.253207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.307 [2024-11-20 09:16:29.253239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.307 [2024-11-20 09:16:29.258216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.307 [2024-11-20 09:16:29.258244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.307 [2024-11-20 09:16:29.258275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.307 [2024-11-20 09:16:29.263216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.307 [2024-11-20 09:16:29.263244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.307 [2024-11-20 09:16:29.263274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.307 [2024-11-20 09:16:29.268313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.307 [2024-11-20 09:16:29.268342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.307 [2024-11-20 09:16:29.268373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.307 [2024-11-20 09:16:29.273458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.307 [2024-11-20 09:16:29.273486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.307 [2024-11-20 09:16:29.273518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.307 [2024-11-20 09:16:29.278564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.307 [2024-11-20 09:16:29.278593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.307 [2024-11-20 09:16:29.278609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.307 [2024-11-20 09:16:29.283868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.307 [2024-11-20 
09:16:29.283895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.307 [2024-11-20 09:16:29.283927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.307 [2024-11-20 09:16:29.288948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.307 [2024-11-20 09:16:29.288975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.307 [2024-11-20 09:16:29.289004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.307 [2024-11-20 09:16:29.294248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.307 [2024-11-20 09:16:29.294291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.307 [2024-11-20 09:16:29.294313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.307 [2024-11-20 09:16:29.299397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.307 [2024-11-20 09:16:29.299425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.307 [2024-11-20 09:16:29.299457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.307 [2024-11-20 09:16:29.304406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x18a6dc0) 00:25:52.307 [2024-11-20 09:16:29.304434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.307 [2024-11-20 09:16:29.304466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.307 [2024-11-20 09:16:29.309292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.307 [2024-11-20 09:16:29.309345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.307 [2024-11-20 09:16:29.309368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.307 [2024-11-20 09:16:29.314294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.307 [2024-11-20 09:16:29.314332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.307 [2024-11-20 09:16:29.314349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.567 [2024-11-20 09:16:29.319421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.567 [2024-11-20 09:16:29.319450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.567 [2024-11-20 09:16:29.319467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.567 [2024-11-20 09:16:29.324445] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.567 [2024-11-20 09:16:29.324474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.567 [2024-11-20 09:16:29.324505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.567 [2024-11-20 09:16:29.329445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.567 [2024-11-20 09:16:29.329475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.567 [2024-11-20 09:16:29.329507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.567 [2024-11-20 09:16:29.334663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.567 [2024-11-20 09:16:29.334707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.567 [2024-11-20 09:16:29.334722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.567 [2024-11-20 09:16:29.339775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.567 [2024-11-20 09:16:29.339803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.567 [2024-11-20 09:16:29.339819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:25:52.567 [2024-11-20 09:16:29.344774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.567 [2024-11-20 09:16:29.344817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.567 [2024-11-20 09:16:29.344834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.567 [2024-11-20 09:16:29.349728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.567 [2024-11-20 09:16:29.349758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.567 [2024-11-20 09:16:29.349774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.567 [2024-11-20 09:16:29.353235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.567 [2024-11-20 09:16:29.353268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.567 [2024-11-20 09:16:29.353300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.567 [2024-11-20 09:16:29.357413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.567 [2024-11-20 09:16:29.357443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.567 [2024-11-20 09:16:29.357460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.567 [2024-11-20 09:16:29.361132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.567 [2024-11-20 09:16:29.361163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.567 [2024-11-20 09:16:29.361180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.567 [2024-11-20 09:16:29.364908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.567 [2024-11-20 09:16:29.364952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.567 [2024-11-20 09:16:29.364969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.567 [2024-11-20 09:16:29.369059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.567 [2024-11-20 09:16:29.369089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.567 [2024-11-20 09:16:29.369105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.568 [2024-11-20 09:16:29.374204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.568 [2024-11-20 09:16:29.374234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.568 [2024-11-20 
09:16:29.374266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.568 [2024-11-20 09:16:29.380881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.568 [2024-11-20 09:16:29.380925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.568 [2024-11-20 09:16:29.380941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.568 [2024-11-20 09:16:29.389106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.568 [2024-11-20 09:16:29.389138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.568 [2024-11-20 09:16:29.389156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.568 [2024-11-20 09:16:29.395692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.568 [2024-11-20 09:16:29.395723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.568 [2024-11-20 09:16:29.395747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.568 [2024-11-20 09:16:29.400046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.568 [2024-11-20 09:16:29.400090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10784 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.568 [2024-11-20 09:16:29.400108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.568 [2024-11-20 09:16:29.406854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.568 [2024-11-20 09:16:29.406884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.568 [2024-11-20 09:16:29.406917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.568 [2024-11-20 09:16:29.413299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.568 [2024-11-20 09:16:29.413351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.568 [2024-11-20 09:16:29.413368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.568 [2024-11-20 09:16:29.418694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.568 [2024-11-20 09:16:29.418723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.568 [2024-11-20 09:16:29.418755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.568 [2024-11-20 09:16:29.423948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.568 [2024-11-20 09:16:29.423991] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.568 [2024-11-20 09:16:29.424007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.568 [2024-11-20 09:16:29.429209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.568 [2024-11-20 09:16:29.429240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.568 [2024-11-20 09:16:29.429274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.568 [2024-11-20 09:16:29.434670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.568 [2024-11-20 09:16:29.434699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.568 [2024-11-20 09:16:29.434732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.568 [2024-11-20 09:16:29.439494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.568 [2024-11-20 09:16:29.439540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.568 [2024-11-20 09:16:29.439558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.568 [2024-11-20 09:16:29.444768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x18a6dc0) 00:25:52.568 [2024-11-20 09:16:29.444819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.568 [2024-11-20 09:16:29.444837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.568 [2024-11-20 09:16:29.450256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.568 [2024-11-20 09:16:29.450287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.568 [2024-11-20 09:16:29.450311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.568 [2024-11-20 09:16:29.455672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.568 [2024-11-20 09:16:29.455719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.568 [2024-11-20 09:16:29.455736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.568 [2024-11-20 09:16:29.461019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.568 [2024-11-20 09:16:29.461064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.568 [2024-11-20 09:16:29.461082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.568 [2024-11-20 09:16:29.466393] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.568 [2024-11-20 09:16:29.466423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.568 [2024-11-20 09:16:29.466455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.568 [2024-11-20 09:16:29.471884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.568 [2024-11-20 09:16:29.471930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.568 [2024-11-20 09:16:29.471947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.568 [2024-11-20 09:16:29.477278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.568 [2024-11-20 09:16:29.477330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.568 [2024-11-20 09:16:29.477347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.568 [2024-11-20 09:16:29.482561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.568 [2024-11-20 09:16:29.482606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.568 [2024-11-20 09:16:29.482623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:25:52.568 [2024-11-20 09:16:29.487867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.568 [2024-11-20 09:16:29.487897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.568 [2024-11-20 09:16:29.487915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.568 [2024-11-20 09:16:29.493211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.568 [2024-11-20 09:16:29.493256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.568 [2024-11-20 09:16:29.493274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.568 [2024-11-20 09:16:29.498513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.568 [2024-11-20 09:16:29.498558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.568 [2024-11-20 09:16:29.498577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.568 [2024-11-20 09:16:29.503929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.569 [2024-11-20 09:16:29.503960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.569 [2024-11-20 09:16:29.503977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.569 [2024-11-20 09:16:29.509296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.569 [2024-11-20 09:16:29.509334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.569 [2024-11-20 09:16:29.509352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.569 [2024-11-20 09:16:29.514677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.569 [2024-11-20 09:16:29.514707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.569 [2024-11-20 09:16:29.514724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.569 [2024-11-20 09:16:29.519946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.569 [2024-11-20 09:16:29.519977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.569 [2024-11-20 09:16:29.519994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.569 [2024-11-20 09:16:29.525268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.569 [2024-11-20 09:16:29.525298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.569 [2024-11-20 09:16:29.525325] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.569 [2024-11-20 09:16:29.530688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.569 [2024-11-20 09:16:29.530718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.569 [2024-11-20 09:16:29.530734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.569 [2024-11-20 09:16:29.535849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.569 [2024-11-20 09:16:29.535879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.569 [2024-11-20 09:16:29.535921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.569 [2024-11-20 09:16:29.541021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.569 [2024-11-20 09:16:29.541051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.569 [2024-11-20 09:16:29.541068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.569 [2024-11-20 09:16:29.546182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.569 [2024-11-20 09:16:29.546213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:52.569 [2024-11-20 09:16:29.546230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.569 [2024-11-20 09:16:29.551266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.569 [2024-11-20 09:16:29.551296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.569 [2024-11-20 09:16:29.551326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.569 [2024-11-20 09:16:29.556418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.569 [2024-11-20 09:16:29.556448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.569 [2024-11-20 09:16:29.556464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.569 [2024-11-20 09:16:29.560398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.569 [2024-11-20 09:16:29.560442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.569 [2024-11-20 09:16:29.560460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.569 [2024-11-20 09:16:29.564314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.569 [2024-11-20 09:16:29.564343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.569 [2024-11-20 09:16:29.564359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.569 [2024-11-20 09:16:29.568774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.569 [2024-11-20 09:16:29.568819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.569 [2024-11-20 09:16:29.568835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.569 [2024-11-20 09:16:29.573237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.569 [2024-11-20 09:16:29.573266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.569 [2024-11-20 09:16:29.573298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.569 [2024-11-20 09:16:29.576396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.569 [2024-11-20 09:16:29.576431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.569 [2024-11-20 09:16:29.576448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.569 [2024-11-20 09:16:29.581021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.569 [2024-11-20 09:16:29.581050] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.569 [2024-11-20 09:16:29.581067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.569 [2024-11-20 09:16:29.585425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.569 [2024-11-20 09:16:29.585454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.569 [2024-11-20 09:16:29.585471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.569 [2024-11-20 09:16:29.588474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.569 [2024-11-20 09:16:29.588503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.569 [2024-11-20 09:16:29.588519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.828 [2024-11-20 09:16:29.592663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.828 [2024-11-20 09:16:29.592693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.828 [2024-11-20 09:16:29.592710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.828 [2024-11-20 09:16:29.596215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x18a6dc0) 00:25:52.828 [2024-11-20 09:16:29.596242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.828 [2024-11-20 09:16:29.596275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.828 [2024-11-20 09:16:29.599888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.828 [2024-11-20 09:16:29.599917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.828 [2024-11-20 09:16:29.599934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.828 [2024-11-20 09:16:29.604708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.828 [2024-11-20 09:16:29.604754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.828 [2024-11-20 09:16:29.604770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.828 [2024-11-20 09:16:29.609087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.828 [2024-11-20 09:16:29.609116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.828 [2024-11-20 09:16:29.609148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.828 [2024-11-20 09:16:29.615863] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.828 [2024-11-20 09:16:29.615893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.828 [2024-11-20 09:16:29.615910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.828 [2024-11-20 09:16:29.621844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.828 [2024-11-20 09:16:29.621888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.828 [2024-11-20 09:16:29.621903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.828 [2024-11-20 09:16:29.627985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.828 [2024-11-20 09:16:29.628015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.828 [2024-11-20 09:16:29.628047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.828 [2024-11-20 09:16:29.633971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.828 [2024-11-20 09:16:29.633999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.828 [2024-11-20 09:16:29.634033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:25:52.828 [2024-11-20 09:16:29.638955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.828 [2024-11-20 09:16:29.638985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.828 [2024-11-20 09:16:29.639002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.829 [2024-11-20 09:16:29.643987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.829 [2024-11-20 09:16:29.644017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.829 [2024-11-20 09:16:29.644034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.829 [2024-11-20 09:16:29.649665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.829 [2024-11-20 09:16:29.649708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.829 [2024-11-20 09:16:29.649723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.829 [2024-11-20 09:16:29.655327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.829 [2024-11-20 09:16:29.655367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.829 [2024-11-20 09:16:29.655385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.829 [2024-11-20 09:16:29.660513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.829 [2024-11-20 09:16:29.660543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.829 [2024-11-20 09:16:29.660567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.829 [2024-11-20 09:16:29.665115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.829 [2024-11-20 09:16:29.665145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.829 [2024-11-20 09:16:29.665176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.829 [2024-11-20 09:16:29.670213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.829 [2024-11-20 09:16:29.670243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.829 [2024-11-20 09:16:29.670260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.829 [2024-11-20 09:16:29.675442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.829 [2024-11-20 09:16:29.675473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.829 [2024-11-20 
09:16:29.675491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.829 [2024-11-20 09:16:29.679427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.829 [2024-11-20 09:16:29.679458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.829 [2024-11-20 09:16:29.679475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.829 [2024-11-20 09:16:29.684245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.829 [2024-11-20 09:16:29.684291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.829 [2024-11-20 09:16:29.684318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.829 [2024-11-20 09:16:29.691967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.829 [2024-11-20 09:16:29.691997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.829 [2024-11-20 09:16:29.692030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.829 [2024-11-20 09:16:29.699198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.829 [2024-11-20 09:16:29.699228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.829 [2024-11-20 09:16:29.699260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.829 [2024-11-20 09:16:29.705887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.829 [2024-11-20 09:16:29.705934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.829 [2024-11-20 09:16:29.705951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.829 [2024-11-20 09:16:29.712347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.829 [2024-11-20 09:16:29.712378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.829 [2024-11-20 09:16:29.712395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.829 [2024-11-20 09:16:29.718248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.829 [2024-11-20 09:16:29.718279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.829 [2024-11-20 09:16:29.718296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.829 [2024-11-20 09:16:29.724911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.829 [2024-11-20 09:16:29.724941] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.829 [2024-11-20 09:16:29.724974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.829 [2024-11-20 09:16:29.730089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.829 [2024-11-20 09:16:29.730119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.829 [2024-11-20 09:16:29.730151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.829 [2024-11-20 09:16:29.734980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.829 [2024-11-20 09:16:29.735026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.829 [2024-11-20 09:16:29.735043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.829 [2024-11-20 09:16:29.739944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.829 [2024-11-20 09:16:29.739975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.829 [2024-11-20 09:16:29.739992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.829 [2024-11-20 09:16:29.745923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x18a6dc0) 00:25:52.829 [2024-11-20 09:16:29.745953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.829 [2024-11-20 09:16:29.745986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.829 [2024-11-20 09:16:29.749524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.829 [2024-11-20 09:16:29.749555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.829 [2024-11-20 09:16:29.749571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.829 [2024-11-20 09:16:29.753232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.829 [2024-11-20 09:16:29.753262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.829 [2024-11-20 09:16:29.753285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.829 [2024-11-20 09:16:29.757007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.829 [2024-11-20 09:16:29.757037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.829 [2024-11-20 09:16:29.757054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.829 [2024-11-20 09:16:29.760476] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.829 [2024-11-20 09:16:29.760506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.829 [2024-11-20 09:16:29.760523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.829 [2024-11-20 09:16:29.764207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.829 [2024-11-20 09:16:29.764236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.829 [2024-11-20 09:16:29.764268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.829 [2024-11-20 09:16:29.768788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.829 [2024-11-20 09:16:29.768817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.829 [2024-11-20 09:16:29.768850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.829 [2024-11-20 09:16:29.773832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.829 [2024-11-20 09:16:29.773860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.829 [2024-11-20 09:16:29.773891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:25:52.829 [2024-11-20 09:16:29.778834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.829 [2024-11-20 09:16:29.778863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.830 [2024-11-20 09:16:29.778895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.830 [2024-11-20 09:16:29.783895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.830 [2024-11-20 09:16:29.783940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.830 [2024-11-20 09:16:29.783957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.830 [2024-11-20 09:16:29.788957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.830 [2024-11-20 09:16:29.789001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.830 [2024-11-20 09:16:29.789017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.830 [2024-11-20 09:16:29.794045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.830 [2024-11-20 09:16:29.794093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.830 [2024-11-20 09:16:29.794110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.830 [2024-11-20 09:16:29.799300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.830 [2024-11-20 09:16:29.799338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.830 [2024-11-20 09:16:29.799355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.830 [2024-11-20 09:16:29.804326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.830 [2024-11-20 09:16:29.804355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.830 [2024-11-20 09:16:29.804372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.830 [2024-11-20 09:16:29.809284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.830 [2024-11-20 09:16:29.809336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.830 [2024-11-20 09:16:29.809355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.830 [2024-11-20 09:16:29.814323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.830 [2024-11-20 09:16:29.814352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.830 [2024-11-20 09:16:29.814369] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.830 [2024-11-20 09:16:29.819412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.830 [2024-11-20 09:16:29.819441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.830 [2024-11-20 09:16:29.819458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.830 [2024-11-20 09:16:29.824629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.830 [2024-11-20 09:16:29.824662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.830 [2024-11-20 09:16:29.824695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.830 [2024-11-20 09:16:29.829672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.830 [2024-11-20 09:16:29.829703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.830 [2024-11-20 09:16:29.829720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.830 [2024-11-20 09:16:29.834941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.830 [2024-11-20 09:16:29.834971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:52.830 [2024-11-20 09:16:29.835003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.830 [2024-11-20 09:16:29.840022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.830 [2024-11-20 09:16:29.840052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.830 [2024-11-20 09:16:29.840068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.830 [2024-11-20 09:16:29.845213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.830 [2024-11-20 09:16:29.845242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.830 [2024-11-20 09:16:29.845259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.830 [2024-11-20 09:16:29.850418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:52.830 [2024-11-20 09:16:29.850448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.830 [2024-11-20 09:16:29.850465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.089 [2024-11-20 09:16:29.855474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.089 [2024-11-20 09:16:29.855520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:7 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.089 [2024-11-20 09:16:29.855537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.089 [2024-11-20 09:16:29.860637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.089 [2024-11-20 09:16:29.860668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.089 [2024-11-20 09:16:29.860701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.089 [2024-11-20 09:16:29.865807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.089 [2024-11-20 09:16:29.865852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.089 [2024-11-20 09:16:29.865868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.089 6005.00 IOPS, 750.62 MiB/s [2024-11-20T08:16:30.118Z] [2024-11-20 09:16:29.872730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.089 [2024-11-20 09:16:29.872761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.089 [2024-11-20 09:16:29.872778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.089 [2024-11-20 09:16:29.878052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x18a6dc0) 00:25:53.089 [2024-11-20 09:16:29.878099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.089 [2024-11-20 09:16:29.878115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.089 [2024-11-20 09:16:29.883945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.089 [2024-11-20 09:16:29.883975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.089 [2024-11-20 09:16:29.884000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.089 [2024-11-20 09:16:29.889011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.089 [2024-11-20 09:16:29.889041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.089 [2024-11-20 09:16:29.889057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.089 [2024-11-20 09:16:29.894060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.089 [2024-11-20 09:16:29.894091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.089 [2024-11-20 09:16:29.894108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.089 [2024-11-20 09:16:29.899135] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.089 [2024-11-20 09:16:29.899164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.089 [2024-11-20 09:16:29.899181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.089 [2024-11-20 09:16:29.904393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.089 [2024-11-20 09:16:29.904423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.089 [2024-11-20 09:16:29.904440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.089 [2024-11-20 09:16:29.908698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.089 [2024-11-20 09:16:29.908728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.089 [2024-11-20 09:16:29.908745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.089 [2024-11-20 09:16:29.911605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.089 [2024-11-20 09:16:29.911635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.089 [2024-11-20 09:16:29.911651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:25:53.089 [2024-11-20 09:16:29.916245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.089 [2024-11-20 09:16:29.916275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.089 [2024-11-20 09:16:29.916292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.089 [2024-11-20 09:16:29.922046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.089 [2024-11-20 09:16:29.922076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.089 [2024-11-20 09:16:29.922109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.089 [2024-11-20 09:16:29.929244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.089 [2024-11-20 09:16:29.929276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.089 [2024-11-20 09:16:29.929293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.089 [2024-11-20 09:16:29.936205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.089 [2024-11-20 09:16:29.936250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.089 [2024-11-20 09:16:29.936267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.089 [2024-11-20 09:16:29.942318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.089 [2024-11-20 09:16:29.942349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.089 [2024-11-20 09:16:29.942367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.089 [2024-11-20 09:16:29.947258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.089 [2024-11-20 09:16:29.947288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.089 [2024-11-20 09:16:29.947314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.089 [2024-11-20 09:16:29.951352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.089 [2024-11-20 09:16:29.951383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.089 [2024-11-20 09:16:29.951400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.089 [2024-11-20 09:16:29.956875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.089 [2024-11-20 09:16:29.956906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.089 [2024-11-20 
09:16:29.956924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.089 [2024-11-20 09:16:29.963817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.090 [2024-11-20 09:16:29.963847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.090 [2024-11-20 09:16:29.963864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.090 [2024-11-20 09:16:29.970584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.090 [2024-11-20 09:16:29.970629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.090 [2024-11-20 09:16:29.970645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.090 [2024-11-20 09:16:29.978264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.090 [2024-11-20 09:16:29.978317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.090 [2024-11-20 09:16:29.978344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.090 [2024-11-20 09:16:29.984424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.090 [2024-11-20 09:16:29.984453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12960 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.090 [2024-11-20 09:16:29.984488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.090 [2024-11-20 09:16:29.992126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.090 [2024-11-20 09:16:29.992156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.090 [2024-11-20 09:16:29.992189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.090 [2024-11-20 09:16:29.999809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.090 [2024-11-20 09:16:29.999838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.090 [2024-11-20 09:16:29.999868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.090 [2024-11-20 09:16:30.007729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.090 [2024-11-20 09:16:30.007779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.090 [2024-11-20 09:16:30.007805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.090 [2024-11-20 09:16:30.013849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.090 [2024-11-20 09:16:30.013884] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.090 [2024-11-20 09:16:30.013902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.090 [2024-11-20 09:16:30.018979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.090 [2024-11-20 09:16:30.019012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.090 [2024-11-20 09:16:30.019030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.090 [2024-11-20 09:16:30.024008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.090 [2024-11-20 09:16:30.024040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.090 [2024-11-20 09:16:30.024058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.090 [2024-11-20 09:16:30.029000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.090 [2024-11-20 09:16:30.029031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.090 [2024-11-20 09:16:30.029048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.090 [2024-11-20 09:16:30.034122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.090 [2024-11-20 
09:16:30.034194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.090 [2024-11-20 09:16:30.034228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.090 [2024-11-20 09:16:30.039562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.090 [2024-11-20 09:16:30.039594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.090 [2024-11-20 09:16:30.039615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.090 [2024-11-20 09:16:30.044698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.090 [2024-11-20 09:16:30.044731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.090 [2024-11-20 09:16:30.044752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.090 [2024-11-20 09:16:30.049703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.090 [2024-11-20 09:16:30.049734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.090 [2024-11-20 09:16:30.049752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.090 [2024-11-20 09:16:30.054755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x18a6dc0) 00:25:53.090 [2024-11-20 09:16:30.054801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.090 [2024-11-20 09:16:30.054818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.090 [2024-11-20 09:16:30.059758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.090 [2024-11-20 09:16:30.059789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.090 [2024-11-20 09:16:30.059806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.090 [2024-11-20 09:16:30.064903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.090 [2024-11-20 09:16:30.064934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.090 [2024-11-20 09:16:30.064966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.090 [2024-11-20 09:16:30.070063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.090 [2024-11-20 09:16:30.070109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.090 [2024-11-20 09:16:30.070126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.090 [2024-11-20 09:16:30.075297] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.090 [2024-11-20 09:16:30.075334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.090 [2024-11-20 09:16:30.075361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:53.090 [2024-11-20 09:16:30.080413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.090 [2024-11-20 09:16:30.080443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.090 [2024-11-20 09:16:30.080459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:53.090 [2024-11-20 09:16:30.085547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.090 [2024-11-20 09:16:30.085577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.090 [2024-11-20 09:16:30.085594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:53.090 [2024-11-20 09:16:30.090626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.090 [2024-11-20 09:16:30.090656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.090 [2024-11-20 09:16:30.090673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:53.090 [2024-11-20 09:16:30.095728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.090 [2024-11-20 09:16:30.095758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.090 [2024-11-20 09:16:30.095775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:53.090 [2024-11-20 09:16:30.100965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.090 [2024-11-20 09:16:30.100995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.090 [2024-11-20 09:16:30.101011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:53.090 [2024-11-20 09:16:30.105992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.090 [2024-11-20 09:16:30.106022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.090 [2024-11-20 09:16:30.106039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:53.090 [2024-11-20 09:16:30.111155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.090 [2024-11-20 09:16:30.111185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.090 [2024-11-20 09:16:30.111202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:53.349 [2024-11-20 09:16:30.116310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.349 [2024-11-20 09:16:30.116341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.349 [2024-11-20 09:16:30.116358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:53.349 [2024-11-20 09:16:30.121637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.349 [2024-11-20 09:16:30.121668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.349 [2024-11-20 09:16:30.121692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:53.349 [2024-11-20 09:16:30.127232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.349 [2024-11-20 09:16:30.127263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.349 [2024-11-20 09:16:30.127281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:53.349 [2024-11-20 09:16:30.132296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.349 [2024-11-20 09:16:30.132332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.349 [2024-11-20 09:16:30.132350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:53.349 [2024-11-20 09:16:30.137411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.349 [2024-11-20 09:16:30.137443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.349 [2024-11-20 09:16:30.137460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:53.349 [2024-11-20 09:16:30.143031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.349 [2024-11-20 09:16:30.143062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.349 [2024-11-20 09:16:30.143079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:53.349 [2024-11-20 09:16:30.147447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.350 [2024-11-20 09:16:30.147478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.350 [2024-11-20 09:16:30.147496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:53.350 [2024-11-20 09:16:30.154246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.350 [2024-11-20 09:16:30.154275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.350 [2024-11-20 09:16:30.154312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:53.350 [2024-11-20 09:16:30.160999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.350 [2024-11-20 09:16:30.161030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.350 [2024-11-20 09:16:30.161049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:53.350 [2024-11-20 09:16:30.168639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.350 [2024-11-20 09:16:30.168670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.350 [2024-11-20 09:16:30.168687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:53.350 [2024-11-20 09:16:30.176270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.350 [2024-11-20 09:16:30.176331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.350 [2024-11-20 09:16:30.176353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:53.350 [2024-11-20 09:16:30.183906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.350 [2024-11-20 09:16:30.183952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.350 [2024-11-20 09:16:30.183969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:53.350 [2024-11-20 09:16:30.191687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.350 [2024-11-20 09:16:30.191718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.350 [2024-11-20 09:16:30.191735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:53.350 [2024-11-20 09:16:30.199383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.350 [2024-11-20 09:16:30.199414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.350 [2024-11-20 09:16:30.199432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:53.350 [2024-11-20 09:16:30.207091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.350 [2024-11-20 09:16:30.207121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.350 [2024-11-20 09:16:30.207152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:53.350 [2024-11-20 09:16:30.214801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.350 [2024-11-20 09:16:30.214847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.350 [2024-11-20 09:16:30.214864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:53.350 [2024-11-20 09:16:30.222533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.350 [2024-11-20 09:16:30.222565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.350 [2024-11-20 09:16:30.222583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:53.350 [2024-11-20 09:16:30.230131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.350 [2024-11-20 09:16:30.230161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.350 [2024-11-20 09:16:30.230178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:53.350 [2024-11-20 09:16:30.237766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.350 [2024-11-20 09:16:30.237812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.350 [2024-11-20 09:16:30.237840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:53.350 [2024-11-20 09:16:30.245459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.350 [2024-11-20 09:16:30.245491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.350 [2024-11-20 09:16:30.245508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:53.350 [2024-11-20 09:16:30.253235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.350 [2024-11-20 09:16:30.253267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.350 [2024-11-20 09:16:30.253285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:53.350 [2024-11-20 09:16:30.260932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.350 [2024-11-20 09:16:30.260978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.350 [2024-11-20 09:16:30.260994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:53.350 [2024-11-20 09:16:30.268656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.350 [2024-11-20 09:16:30.268701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.350 [2024-11-20 09:16:30.268718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:53.350 [2024-11-20 09:16:30.275921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.350 [2024-11-20 09:16:30.275952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.350 [2024-11-20 09:16:30.275969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:53.350 [2024-11-20 09:16:30.280927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.350 [2024-11-20 09:16:30.280958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.350 [2024-11-20 09:16:30.280975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:53.350 [2024-11-20 09:16:30.285225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.350 [2024-11-20 09:16:30.285256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.350 [2024-11-20 09:16:30.285272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:53.350 [2024-11-20 09:16:30.289994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.350 [2024-11-20 09:16:30.290023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.350 [2024-11-20 09:16:30.290041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:53.350 [2024-11-20 09:16:30.294916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.350 [2024-11-20 09:16:30.294966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.350 [2024-11-20 09:16:30.294983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:53.350 [2024-11-20 09:16:30.299951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.350 [2024-11-20 09:16:30.299980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.350 [2024-11-20 09:16:30.299997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:53.350 [2024-11-20 09:16:30.304827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.350 [2024-11-20 09:16:30.304857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.350 [2024-11-20 09:16:30.304874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:53.350 [2024-11-20 09:16:30.309780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.350 [2024-11-20 09:16:30.309809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.350 [2024-11-20 09:16:30.309826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:53.350 [2024-11-20 09:16:30.314720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.350 [2024-11-20 09:16:30.314750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.350 [2024-11-20 09:16:30.314767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:53.350 [2024-11-20 09:16:30.319787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.350 [2024-11-20 09:16:30.319830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.350 [2024-11-20 09:16:30.319847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:53.350 [2024-11-20 09:16:30.324893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.351 [2024-11-20 09:16:30.324922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.351 [2024-11-20 09:16:30.324939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:53.351 [2024-11-20 09:16:30.329901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.351 [2024-11-20 09:16:30.329931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.351 [2024-11-20 09:16:30.329948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:53.351 [2024-11-20 09:16:30.334882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.351 [2024-11-20 09:16:30.334911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.351 [2024-11-20 09:16:30.334927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:53.351 [2024-11-20 09:16:30.339900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.351 [2024-11-20 09:16:30.339928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.351 [2024-11-20 09:16:30.339944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:53.351 [2024-11-20 09:16:30.344861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.351 [2024-11-20 09:16:30.344890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.351 [2024-11-20 09:16:30.344906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:53.351 [2024-11-20 09:16:30.349911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.351 [2024-11-20 09:16:30.349941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.351 [2024-11-20 09:16:30.349957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:53.351 [2024-11-20 09:16:30.354940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.351 [2024-11-20 09:16:30.354969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.351 [2024-11-20 09:16:30.355000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:53.351 [2024-11-20 09:16:30.360085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.351 [2024-11-20 09:16:30.360129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.351 [2024-11-20 09:16:30.360145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:53.351 [2024-11-20 09:16:30.365185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.351 [2024-11-20 09:16:30.365230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.351 [2024-11-20 09:16:30.365246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:53.351 [2024-11-20 09:16:30.370344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.351 [2024-11-20 09:16:30.370373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.351 [2024-11-20 09:16:30.370405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:53.351 [2024-11-20 09:16:30.375528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.351 [2024-11-20 09:16:30.375558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.351 [2024-11-20 09:16:30.375576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:53.610 [2024-11-20 09:16:30.380590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.610 [2024-11-20 09:16:30.380619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.610 [2024-11-20 09:16:30.380642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:53.610 [2024-11-20 09:16:30.385657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.610 [2024-11-20 09:16:30.385700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.610 [2024-11-20 09:16:30.385716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:53.610 [2024-11-20 09:16:30.390823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.610 [2024-11-20 09:16:30.390852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.610 [2024-11-20 09:16:30.390884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:53.610 [2024-11-20 09:16:30.395870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.610 [2024-11-20 09:16:30.395899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.610 [2024-11-20 09:16:30.395915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:53.610 [2024-11-20 09:16:30.400787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.610 [2024-11-20 09:16:30.400817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.610 [2024-11-20 09:16:30.400834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:53.610 [2024-11-20 09:16:30.405736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.610 [2024-11-20 09:16:30.405766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.610 [2024-11-20 09:16:30.405782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:53.610 [2024-11-20 09:16:30.410853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.611 [2024-11-20 09:16:30.410883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.611 [2024-11-20 09:16:30.410915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:53.611 [2024-11-20 09:16:30.416126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.611 [2024-11-20 09:16:30.416170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.611 [2024-11-20 09:16:30.416187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:53.611 [2024-11-20 09:16:30.421503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.611 [2024-11-20 09:16:30.421534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.611 [2024-11-20 09:16:30.421551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:53.611 [2024-11-20 09:16:30.426730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.611 [2024-11-20 09:16:30.426766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.611 [2024-11-20 09:16:30.426798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:53.611 [2024-11-20 09:16:30.431925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.611 [2024-11-20 09:16:30.431955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.611 [2024-11-20 09:16:30.431986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:53.611 [2024-11-20 09:16:30.437170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.611 [2024-11-20 09:16:30.437200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.611 [2024-11-20 09:16:30.437217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:53.611 [2024-11-20 09:16:30.442328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.611 [2024-11-20 09:16:30.442364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.611 [2024-11-20 09:16:30.442382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:53.611 [2024-11-20 09:16:30.447422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.611 [2024-11-20 09:16:30.447452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.611 [2024-11-20 09:16:30.447469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:53.611 [2024-11-20 09:16:30.452499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.611 [2024-11-20 09:16:30.452529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.611 [2024-11-20 09:16:30.452545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:53.611 [2024-11-20 09:16:30.457673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.611 [2024-11-20 09:16:30.457702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.611 [2024-11-20 09:16:30.457733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:53.611 [2024-11-20 09:16:30.462816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.611 [2024-11-20 09:16:30.462862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.611 [2024-11-20 09:16:30.462879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:53.611 [2024-11-20 09:16:30.467899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.611 [2024-11-20 09:16:30.467928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.611 [2024-11-20 09:16:30.467945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:53.611 [2024-11-20 09:16:30.472897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.611 [2024-11-20 09:16:30.472926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.611 [2024-11-20 09:16:30.472942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:53.611 [2024-11-20 09:16:30.477917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.611 [2024-11-20 09:16:30.477946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.611 [2024-11-20 09:16:30.477963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:53.611 [2024-11-20 09:16:30.482852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.611 [2024-11-20 09:16:30.482881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.611 [2024-11-20 09:16:30.482898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:53.611 [2024-11-20 09:16:30.487931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.611 [2024-11-20 09:16:30.487960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.611 [2024-11-20 09:16:30.487976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:53.611 [2024-11-20 09:16:30.493023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.611 [2024-11-20 09:16:30.493068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.611 [2024-11-20 09:16:30.493085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:53.611 [2024-11-20 09:16:30.498027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.611 [2024-11-20 09:16:30.498056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.611 [2024-11-20 09:16:30.498073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:53.611 [2024-11-20 09:16:30.503053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.611 [2024-11-20 09:16:30.503083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.611 [2024-11-20 09:16:30.503100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:53.611 [2024-11-20 09:16:30.508161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.611 [2024-11-20 09:16:30.508190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.611 [2024-11-20 09:16:30.508223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:53.611 [2024-11-20 09:16:30.513225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.611 [2024-11-20 09:16:30.513255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.611 [2024-11-20 09:16:30.513277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:53.611 [2024-11-20 09:16:30.518296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.611 [2024-11-20 09:16:30.518346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.611 [2024-11-20 09:16:30.518362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:53.611 [2024-11-20 09:16:30.523288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.611 [2024-11-20 09:16:30.523324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:53.611 [2024-11-20 09:16:30.523343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:53.611 [2024-11-20 09:16:30.528433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0)
00:25:53.611 [2024-11-20
09:16:30.528463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.611 [2024-11-20 09:16:30.528480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.611 [2024-11-20 09:16:30.533520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.611 [2024-11-20 09:16:30.533550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.611 [2024-11-20 09:16:30.533567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.611 [2024-11-20 09:16:30.538662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.611 [2024-11-20 09:16:30.538706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.611 [2024-11-20 09:16:30.538724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.611 [2024-11-20 09:16:30.543844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.611 [2024-11-20 09:16:30.543873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.611 [2024-11-20 09:16:30.543890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.612 [2024-11-20 09:16:30.549013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x18a6dc0) 00:25:53.612 [2024-11-20 09:16:30.549042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.612 [2024-11-20 09:16:30.549059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.612 [2024-11-20 09:16:30.553999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.612 [2024-11-20 09:16:30.554028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.612 [2024-11-20 09:16:30.554046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.612 [2024-11-20 09:16:30.559083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.612 [2024-11-20 09:16:30.559128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.612 [2024-11-20 09:16:30.559147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.612 [2024-11-20 09:16:30.564230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.612 [2024-11-20 09:16:30.564275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.612 [2024-11-20 09:16:30.564292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.612 [2024-11-20 09:16:30.569333] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.612 [2024-11-20 09:16:30.569371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.612 [2024-11-20 09:16:30.569402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.612 [2024-11-20 09:16:30.574407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.612 [2024-11-20 09:16:30.574437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.612 [2024-11-20 09:16:30.574454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.612 [2024-11-20 09:16:30.579460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.612 [2024-11-20 09:16:30.579498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.612 [2024-11-20 09:16:30.579516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.612 [2024-11-20 09:16:30.584280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.612 [2024-11-20 09:16:30.584316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.612 [2024-11-20 09:16:30.584336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:25:53.612 [2024-11-20 09:16:30.587918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.612 [2024-11-20 09:16:30.587948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.612 [2024-11-20 09:16:30.587964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.612 [2024-11-20 09:16:30.591923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.612 [2024-11-20 09:16:30.591952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.612 [2024-11-20 09:16:30.591968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.612 [2024-11-20 09:16:30.596811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.612 [2024-11-20 09:16:30.596840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.612 [2024-11-20 09:16:30.596863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.612 [2024-11-20 09:16:30.600380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.612 [2024-11-20 09:16:30.600409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.612 [2024-11-20 09:16:30.600426] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.612 [2024-11-20 09:16:30.604562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.612 [2024-11-20 09:16:30.604591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.612 [2024-11-20 09:16:30.604608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.612 [2024-11-20 09:16:30.608697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.612 [2024-11-20 09:16:30.608726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.612 [2024-11-20 09:16:30.608743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.612 [2024-11-20 09:16:30.612282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.612 [2024-11-20 09:16:30.612319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.612 [2024-11-20 09:16:30.612338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.612 [2024-11-20 09:16:30.615574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.612 [2024-11-20 09:16:30.615603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.612 [2024-11-20 
09:16:30.615620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.612 [2024-11-20 09:16:30.619281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.612 [2024-11-20 09:16:30.619318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.612 [2024-11-20 09:16:30.619337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.612 [2024-11-20 09:16:30.623341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.612 [2024-11-20 09:16:30.623370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.612 [2024-11-20 09:16:30.623387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.612 [2024-11-20 09:16:30.626958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.612 [2024-11-20 09:16:30.626987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.612 [2024-11-20 09:16:30.627004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.612 [2024-11-20 09:16:30.630971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.612 [2024-11-20 09:16:30.631006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5568 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.612 [2024-11-20 09:16:30.631024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.612 [2024-11-20 09:16:30.635790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.612 [2024-11-20 09:16:30.635820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.612 [2024-11-20 09:16:30.635838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.871 [2024-11-20 09:16:30.640584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.871 [2024-11-20 09:16:30.640613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.871 [2024-11-20 09:16:30.640630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.871 [2024-11-20 09:16:30.645850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.871 [2024-11-20 09:16:30.645879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.871 [2024-11-20 09:16:30.645895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.871 [2024-11-20 09:16:30.650837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.871 [2024-11-20 09:16:30.650866] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.871 [2024-11-20 09:16:30.650882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.871 [2024-11-20 09:16:30.655815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.871 [2024-11-20 09:16:30.655845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.871 [2024-11-20 09:16:30.655862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.871 [2024-11-20 09:16:30.661011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.871 [2024-11-20 09:16:30.661041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.871 [2024-11-20 09:16:30.661057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.871 [2024-11-20 09:16:30.666146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.871 [2024-11-20 09:16:30.666175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.871 [2024-11-20 09:16:30.666192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.871 [2024-11-20 09:16:30.671171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x18a6dc0) 00:25:53.871 [2024-11-20 09:16:30.671200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.871 [2024-11-20 09:16:30.671217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.872 [2024-11-20 09:16:30.676178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.872 [2024-11-20 09:16:30.676207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.872 [2024-11-20 09:16:30.676224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.872 [2024-11-20 09:16:30.681256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.872 [2024-11-20 09:16:30.681285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.872 [2024-11-20 09:16:30.681309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.872 [2024-11-20 09:16:30.686265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.872 [2024-11-20 09:16:30.686295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.872 [2024-11-20 09:16:30.686322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.872 [2024-11-20 09:16:30.691319] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.872 [2024-11-20 09:16:30.691348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.872 [2024-11-20 09:16:30.691364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.872 [2024-11-20 09:16:30.696395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.872 [2024-11-20 09:16:30.696424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.872 [2024-11-20 09:16:30.696441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.872 [2024-11-20 09:16:30.701471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.872 [2024-11-20 09:16:30.701500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.872 [2024-11-20 09:16:30.701516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.872 [2024-11-20 09:16:30.706438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.872 [2024-11-20 09:16:30.706467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.872 [2024-11-20 09:16:30.706484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:25:53.872 [2024-11-20 09:16:30.711508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.872 [2024-11-20 09:16:30.711536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.872 [2024-11-20 09:16:30.711553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.872 [2024-11-20 09:16:30.716526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.872 [2024-11-20 09:16:30.716555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.872 [2024-11-20 09:16:30.716579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.872 [2024-11-20 09:16:30.721552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.872 [2024-11-20 09:16:30.721581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.872 [2024-11-20 09:16:30.721598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.872 [2024-11-20 09:16:30.726597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.872 [2024-11-20 09:16:30.726626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.872 [2024-11-20 09:16:30.726643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.872 [2024-11-20 09:16:30.731761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.872 [2024-11-20 09:16:30.731790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.872 [2024-11-20 09:16:30.731807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.872 [2024-11-20 09:16:30.737220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.872 [2024-11-20 09:16:30.737249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.872 [2024-11-20 09:16:30.737267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.872 [2024-11-20 09:16:30.742682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.872 [2024-11-20 09:16:30.742710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.872 [2024-11-20 09:16:30.742727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.872 [2024-11-20 09:16:30.747955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.872 [2024-11-20 09:16:30.747985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.872 [2024-11-20 09:16:30.748001] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.872 [2024-11-20 09:16:30.753152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.872 [2024-11-20 09:16:30.753182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.872 [2024-11-20 09:16:30.753200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.872 [2024-11-20 09:16:30.758312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.872 [2024-11-20 09:16:30.758341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.872 [2024-11-20 09:16:30.758358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.872 [2024-11-20 09:16:30.763360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.872 [2024-11-20 09:16:30.763389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.872 [2024-11-20 09:16:30.763406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.872 [2024-11-20 09:16:30.769254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.872 [2024-11-20 09:16:30.769284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:53.872 [2024-11-20 09:16:30.769301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.872 [2024-11-20 09:16:30.774388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.872 [2024-11-20 09:16:30.774418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.872 [2024-11-20 09:16:30.774435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.872 [2024-11-20 09:16:30.779423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.872 [2024-11-20 09:16:30.779453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.872 [2024-11-20 09:16:30.779469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.872 [2024-11-20 09:16:30.784513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.872 [2024-11-20 09:16:30.784543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.872 [2024-11-20 09:16:30.784559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.872 [2024-11-20 09:16:30.789650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.872 [2024-11-20 09:16:30.789680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.872 [2024-11-20 09:16:30.789697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.872 [2024-11-20 09:16:30.794669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.872 [2024-11-20 09:16:30.794699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.872 [2024-11-20 09:16:30.794716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.872 [2024-11-20 09:16:30.799913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.872 [2024-11-20 09:16:30.799942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.872 [2024-11-20 09:16:30.799959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.872 [2024-11-20 09:16:30.805073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.872 [2024-11-20 09:16:30.805104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.872 [2024-11-20 09:16:30.805127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.872 [2024-11-20 09:16:30.810290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.872 [2024-11-20 09:16:30.810329] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.873 [2024-11-20 09:16:30.810348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.873 [2024-11-20 09:16:30.815555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.873 [2024-11-20 09:16:30.815596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.873 [2024-11-20 09:16:30.815613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.873 [2024-11-20 09:16:30.820640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.873 [2024-11-20 09:16:30.820671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.873 [2024-11-20 09:16:30.820689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.873 [2024-11-20 09:16:30.825780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.873 [2024-11-20 09:16:30.825810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.873 [2024-11-20 09:16:30.825827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.873 [2024-11-20 09:16:30.830797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x18a6dc0) 00:25:53.873 [2024-11-20 09:16:30.830834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.873 [2024-11-20 09:16:30.830851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.873 [2024-11-20 09:16:30.835974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.873 [2024-11-20 09:16:30.836004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.873 [2024-11-20 09:16:30.836020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.873 [2024-11-20 09:16:30.841193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.873 [2024-11-20 09:16:30.841224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.873 [2024-11-20 09:16:30.841240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.873 [2024-11-20 09:16:30.846389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.873 [2024-11-20 09:16:30.846420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.873 [2024-11-20 09:16:30.846437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.873 [2024-11-20 09:16:30.851447] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.873 [2024-11-20 09:16:30.851484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.873 [2024-11-20 09:16:30.851501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.873 [2024-11-20 09:16:30.854864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.873 [2024-11-20 09:16:30.854894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.873 [2024-11-20 09:16:30.854911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.873 [2024-11-20 09:16:30.861289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.873 [2024-11-20 09:16:30.861330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.873 [2024-11-20 09:16:30.861348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.873 [2024-11-20 09:16:30.868995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.873 [2024-11-20 09:16:30.869027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.873 [2024-11-20 09:16:30.869059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:25:53.873 5896.50 IOPS, 737.06 MiB/s [2024-11-20T08:16:30.902Z] [2024-11-20 09:16:30.877764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a6dc0) 00:25:53.873 [2024-11-20 09:16:30.877795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.873 [2024-11-20 09:16:30.877812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.873 00:25:53.873 Latency(us) 00:25:53.873 [2024-11-20T08:16:30.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:53.873 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:53.873 nvme0n1 : 2.01 5894.94 736.87 0.00 0.00 2709.00 782.79 12087.75 00:25:53.873 [2024-11-20T08:16:30.902Z] =================================================================================================================== 00:25:53.873 [2024-11-20T08:16:30.902Z] Total : 5894.94 736.87 0.00 0.00 2709.00 782.79 12087.75 00:25:53.873 { 00:25:53.873 "results": [ 00:25:53.873 { 00:25:53.873 "job": "nvme0n1", 00:25:53.873 "core_mask": "0x2", 00:25:53.873 "workload": "randread", 00:25:53.873 "status": "finished", 00:25:53.873 "queue_depth": 16, 00:25:53.873 "io_size": 131072, 00:25:53.873 "runtime": 2.005618, 00:25:53.873 "iops": 5894.9411104208275, 00:25:53.873 "mibps": 736.8676388026034, 00:25:53.873 "io_failed": 0, 00:25:53.873 "io_timeout": 0, 00:25:53.873 "avg_latency_us": 2709.002664361054, 00:25:53.873 "min_latency_us": 782.7911111111111, 00:25:53.873 "max_latency_us": 12087.75111111111 00:25:53.873 } 00:25:53.873 ], 00:25:53.873 "core_count": 1 00:25:53.873 } 00:25:53.873 09:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:53.873 09:16:30 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:53.873 09:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:53.873 | .driver_specific 00:25:53.873 | .nvme_error 00:25:53.873 | .status_code 00:25:53.873 | .command_transient_transport_error' 00:25:53.873 09:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:54.439 09:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 382 > 0 )) 00:25:54.439 09:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3434249 00:25:54.439 09:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3434249 ']' 00:25:54.439 09:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3434249 00:25:54.439 09:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:25:54.439 09:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:54.439 09:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3434249 00:25:54.439 09:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:54.439 09:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:54.439 09:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3434249' 00:25:54.439 killing process with pid 3434249 00:25:54.439 09:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@971 -- # kill 3434249 00:25:54.439 Received shutdown signal, test time was about 2.000000 seconds 00:25:54.439 00:25:54.439 Latency(us) 00:25:54.439 [2024-11-20T08:16:31.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:54.439 [2024-11-20T08:16:31.468Z] =================================================================================================================== 00:25:54.439 [2024-11-20T08:16:31.468Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:54.439 09:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3434249 00:25:54.439 09:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:25:54.439 09:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:54.439 09:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:54.439 09:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:54.439 09:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:54.439 09:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3434654 00:25:54.439 09:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:25:54.439 09:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3434654 /var/tmp/bperf.sock 00:25:54.439 09:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3434654 ']' 00:25:54.439 09:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:54.439 09:16:31 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:54.439 09:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:54.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:54.439 09:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:54.439 09:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:54.697 [2024-11-20 09:16:31.491463] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:25:54.697 [2024-11-20 09:16:31.491550] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3434654 ] 00:25:54.697 [2024-11-20 09:16:31.556453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.697 [2024-11-20 09:16:31.612774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:54.697 09:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:54.697 09:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:25:54.697 09:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:54.697 09:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:55.262 09:16:31 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:55.262 09:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.262 09:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:55.262 09:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.262 09:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:55.262 09:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:55.520 nvme0n1 00:25:55.520 09:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:55.520 09:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.520 09:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:55.520 09:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.520 09:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:55.520 09:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:55.520 Running I/O for 2 seconds... 
00:25:55.520 [2024-11-20 09:16:32.519945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f31b8 00:25:55.520 [2024-11-20 09:16:32.521186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.520 [2024-11-20 09:16:32.521229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:55.520 [2024-11-20 09:16:32.532144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166dece0 00:25:55.520 [2024-11-20 09:16:32.532914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.520 [2024-11-20 09:16:32.532945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:55.520 [2024-11-20 09:16:32.543582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166df550 00:25:55.520 [2024-11-20 09:16:32.544175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:21167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.520 [2024-11-20 09:16:32.544208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:55.779 [2024-11-20 09:16:32.557946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166ebb98 00:25:55.779 [2024-11-20 09:16:32.559616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.779 [2024-11-20 09:16:32.559646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:19 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:55.779 [2024-11-20 09:16:32.569048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166ec408 00:25:55.779 [2024-11-20 09:16:32.570415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.779 [2024-11-20 09:16:32.570446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:55.779 [2024-11-20 09:16:32.580893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:55.779 [2024-11-20 09:16:32.581181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.779 [2024-11-20 09:16:32.581228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:55.779 [2024-11-20 09:16:32.594959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:55.779 [2024-11-20 09:16:32.595256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.779 [2024-11-20 09:16:32.595299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:55.779 [2024-11-20 09:16:32.608989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:55.779 [2024-11-20 09:16:32.609243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.779 [2024-11-20 09:16:32.609288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:55.779 [2024-11-20 09:16:32.622905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:55.779 [2024-11-20 09:16:32.623141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.779 [2024-11-20 09:16:32.623170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:55.779 [2024-11-20 09:16:32.637064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:55.779 [2024-11-20 09:16:32.637353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.779 [2024-11-20 09:16:32.637399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:55.779 [2024-11-20 09:16:32.651264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:55.779 [2024-11-20 09:16:32.651581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.779 [2024-11-20 09:16:32.651610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:55.779 [2024-11-20 09:16:32.665217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:55.779 [2024-11-20 09:16:32.665577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.779 [2024-11-20 09:16:32.665608] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:55.779 [2024-11-20 09:16:32.679222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:55.779 [2024-11-20 09:16:32.679513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.779 [2024-11-20 09:16:32.679545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:55.779 [2024-11-20 09:16:32.693237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:55.779 [2024-11-20 09:16:32.693574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.780 [2024-11-20 09:16:32.693606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:55.780 [2024-11-20 09:16:32.707017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:55.780 [2024-11-20 09:16:32.707325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.780 [2024-11-20 09:16:32.707355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:55.780 [2024-11-20 09:16:32.721180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:55.780 [2024-11-20 09:16:32.721516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.780 
[2024-11-20 09:16:32.721546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:55.780 [2024-11-20 09:16:32.735059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:55.780 [2024-11-20 09:16:32.735346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.780 [2024-11-20 09:16:32.735391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:55.780 [2024-11-20 09:16:32.749136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:55.780 [2024-11-20 09:16:32.749421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.780 [2024-11-20 09:16:32.749454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:55.780 [2024-11-20 09:16:32.763442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:55.780 [2024-11-20 09:16:32.763707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.780 [2024-11-20 09:16:32.763753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:55.780 [2024-11-20 09:16:32.777534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:55.780 [2024-11-20 09:16:32.777850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13171 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:55.780 [2024-11-20 09:16:32.777880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:55.780 [2024-11-20 09:16:32.791053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:55.780 [2024-11-20 09:16:32.791344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.780 [2024-11-20 09:16:32.791382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:55.780 [2024-11-20 09:16:32.804767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:55.780 [2024-11-20 09:16:32.805083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.780 [2024-11-20 09:16:32.805113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.038 [2024-11-20 09:16:32.818513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.039 [2024-11-20 09:16:32.818757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.039 [2024-11-20 09:16:32.818802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.039 [2024-11-20 09:16:32.832524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.039 [2024-11-20 09:16:32.832789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:49 nsid:1 lba:4511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.039 [2024-11-20 09:16:32.832821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.039 [2024-11-20 09:16:32.846487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.039 [2024-11-20 09:16:32.846733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.039 [2024-11-20 09:16:32.846775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.039 [2024-11-20 09:16:32.860502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.039 [2024-11-20 09:16:32.860799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.039 [2024-11-20 09:16:32.860843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.039 [2024-11-20 09:16:32.874513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.039 [2024-11-20 09:16:32.874800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.039 [2024-11-20 09:16:32.874842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.039 [2024-11-20 09:16:32.888552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.039 [2024-11-20 09:16:32.888791] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.039 [2024-11-20 09:16:32.888820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.039 [2024-11-20 09:16:32.902677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.039 [2024-11-20 09:16:32.902983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.039 [2024-11-20 09:16:32.903012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.039 [2024-11-20 09:16:32.916493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.039 [2024-11-20 09:16:32.916842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.039 [2024-11-20 09:16:32.916872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.039 [2024-11-20 09:16:32.930582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.039 [2024-11-20 09:16:32.930911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:18476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.039 [2024-11-20 09:16:32.930957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.039 [2024-11-20 09:16:32.944486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.039 
[2024-11-20 09:16:32.944830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.039 [2024-11-20 09:16:32.944860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.039 [2024-11-20 09:16:32.958706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.039 [2024-11-20 09:16:32.959063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.039 [2024-11-20 09:16:32.959106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.039 [2024-11-20 09:16:32.973032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.039 [2024-11-20 09:16:32.973341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.039 [2024-11-20 09:16:32.973370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.039 [2024-11-20 09:16:32.987426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.039 [2024-11-20 09:16:32.987725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.039 [2024-11-20 09:16:32.987755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.039 [2024-11-20 09:16:33.001681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.039 [2024-11-20 09:16:33.001966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.039 [2024-11-20 09:16:33.002013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.039 [2024-11-20 09:16:33.015930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.039 [2024-11-20 09:16:33.016285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.039 [2024-11-20 09:16:33.016336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.039 [2024-11-20 09:16:33.030268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.039 [2024-11-20 09:16:33.030529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.039 [2024-11-20 09:16:33.030558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.039 [2024-11-20 09:16:33.044253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.039 [2024-11-20 09:16:33.044770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.039 [2024-11-20 09:16:33.044798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.039 [2024-11-20 09:16:33.058645] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.039 [2024-11-20 09:16:33.058994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.039 [2024-11-20 09:16:33.059039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.298 [2024-11-20 09:16:33.072537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.298 [2024-11-20 09:16:33.072885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.298 [2024-11-20 09:16:33.072917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.298 [2024-11-20 09:16:33.086349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.298 [2024-11-20 09:16:33.086664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.298 [2024-11-20 09:16:33.086694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.298 [2024-11-20 09:16:33.100762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.298 [2024-11-20 09:16:33.101122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.298 [2024-11-20 09:16:33.101152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 
00:25:56.298 [2024-11-20 09:16:33.115092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.298 [2024-11-20 09:16:33.115442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.298 [2024-11-20 09:16:33.115475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.298 [2024-11-20 09:16:33.129447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.298 [2024-11-20 09:16:33.129783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.298 [2024-11-20 09:16:33.129814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.298 [2024-11-20 09:16:33.143839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.298 [2024-11-20 09:16:33.144130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.298 [2024-11-20 09:16:33.144177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.298 [2024-11-20 09:16:33.157989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.298 [2024-11-20 09:16:33.158280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.298 [2024-11-20 09:16:33.158333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.298 [2024-11-20 09:16:33.172341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.298 [2024-11-20 09:16:33.172578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:25191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.298 [2024-11-20 09:16:33.172611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.298 [2024-11-20 09:16:33.186389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.298 [2024-11-20 09:16:33.186746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.298 [2024-11-20 09:16:33.186775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.298 [2024-11-20 09:16:33.200728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.298 [2024-11-20 09:16:33.201033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.298 [2024-11-20 09:16:33.201079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.298 [2024-11-20 09:16:33.214995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.298 [2024-11-20 09:16:33.215253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.298 [2024-11-20 09:16:33.215298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.298 [2024-11-20 09:16:33.229075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.298 [2024-11-20 09:16:33.229386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.298 [2024-11-20 09:16:33.229415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.298 [2024-11-20 09:16:33.243238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.298 [2024-11-20 09:16:33.243483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.298 [2024-11-20 09:16:33.243515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.298 [2024-11-20 09:16:33.257202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.298 [2024-11-20 09:16:33.257545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.298 [2024-11-20 09:16:33.257574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.298 [2024-11-20 09:16:33.271460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.298 [2024-11-20 09:16:33.271715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.298 [2024-11-20 09:16:33.271747] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.298 [2024-11-20 09:16:33.285687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.298 [2024-11-20 09:16:33.285970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.298 [2024-11-20 09:16:33.286004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.298 [2024-11-20 09:16:33.299431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.298 [2024-11-20 09:16:33.299695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.298 [2024-11-20 09:16:33.299726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.299 [2024-11-20 09:16:33.313441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.299 [2024-11-20 09:16:33.313709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.299 [2024-11-20 09:16:33.313741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.557 [2024-11-20 09:16:33.327248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.557 [2024-11-20 09:16:33.327472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.557 
[2024-11-20 09:16:33.327501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.557 [2024-11-20 09:16:33.340957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.557 [2024-11-20 09:16:33.341230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.557 [2024-11-20 09:16:33.341274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.557 [2024-11-20 09:16:33.355162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.557 [2024-11-20 09:16:33.355440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.557 [2024-11-20 09:16:33.355470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.557 [2024-11-20 09:16:33.369498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.557 [2024-11-20 09:16:33.369858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.557 [2024-11-20 09:16:33.369890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.557 [2024-11-20 09:16:33.383887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.557 [2024-11-20 09:16:33.384229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23115 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:56.557 [2024-11-20 09:16:33.384260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.557 [2024-11-20 09:16:33.398116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.557 [2024-11-20 09:16:33.398375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.557 [2024-11-20 09:16:33.398404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.557 [2024-11-20 09:16:33.412510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.557 [2024-11-20 09:16:33.412840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.557 [2024-11-20 09:16:33.412869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.557 [2024-11-20 09:16:33.426788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.557 [2024-11-20 09:16:33.427111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.557 [2024-11-20 09:16:33.427157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.557 [2024-11-20 09:16:33.441146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.557 [2024-11-20 09:16:33.441420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:48 nsid:1 lba:8456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.557 [2024-11-20 09:16:33.441453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.557 [2024-11-20 09:16:33.455463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.557 [2024-11-20 09:16:33.455744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.557 [2024-11-20 09:16:33.455789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.557 [2024-11-20 09:16:33.469875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.557 [2024-11-20 09:16:33.470164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.557 [2024-11-20 09:16:33.470210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.557 [2024-11-20 09:16:33.484086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.557 [2024-11-20 09:16:33.484452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:25497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.557 [2024-11-20 09:16:33.484484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.557 [2024-11-20 09:16:33.498560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.557 [2024-11-20 09:16:33.498891] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.557 [2024-11-20 09:16:33.498940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.557 18243.00 IOPS, 71.26 MiB/s [2024-11-20T08:16:33.586Z] [2024-11-20 09:16:33.512913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.557 [2024-11-20 09:16:33.513236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.557 [2024-11-20 09:16:33.513282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.557 [2024-11-20 09:16:33.527120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.557 [2024-11-20 09:16:33.527417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.557 [2024-11-20 09:16:33.527447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.557 [2024-11-20 09:16:33.541322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.557 [2024-11-20 09:16:33.541598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.557 [2024-11-20 09:16:33.541630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.557 [2024-11-20 09:16:33.555163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.557 [2024-11-20 09:16:33.555442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.557 [2024-11-20 09:16:33.555473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.557 [2024-11-20 09:16:33.569426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.557 [2024-11-20 09:16:33.569758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.557 [2024-11-20 09:16:33.569788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.557 [2024-11-20 09:16:33.583531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.815 [2024-11-20 09:16:33.583845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.815 [2024-11-20 09:16:33.583874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.815 [2024-11-20 09:16:33.597434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.815 [2024-11-20 09:16:33.597695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.815 [2024-11-20 09:16:33.597740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.815 [2024-11-20 09:16:33.611725] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.815 [2024-11-20 09:16:33.612058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.815 [2024-11-20 09:16:33.612102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.815 [2024-11-20 09:16:33.626109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.815 [2024-11-20 09:16:33.626469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.815 [2024-11-20 09:16:33.626499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.815 [2024-11-20 09:16:33.640430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.815 [2024-11-20 09:16:33.640793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.815 [2024-11-20 09:16:33.640825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.815 [2024-11-20 09:16:33.654732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.815 [2024-11-20 09:16:33.655000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.815 [2024-11-20 09:16:33.655050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 
00:25:56.815 [2024-11-20 09:16:33.668923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.815 [2024-11-20 09:16:33.669174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.815 [2024-11-20 09:16:33.669202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.815 [2024-11-20 09:16:33.683128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.815 [2024-11-20 09:16:33.683425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.815 [2024-11-20 09:16:33.683454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.815 [2024-11-20 09:16:33.697420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.815 [2024-11-20 09:16:33.697775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.815 [2024-11-20 09:16:33.697803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.815 [2024-11-20 09:16:33.711651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.815 [2024-11-20 09:16:33.711921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.815 [2024-11-20 09:16:33.711964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.815 [2024-11-20 09:16:33.725786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.815 [2024-11-20 09:16:33.726159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.815 [2024-11-20 09:16:33.726188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.815 [2024-11-20 09:16:33.740181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.815 [2024-11-20 09:16:33.740485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.815 [2024-11-20 09:16:33.740517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.815 [2024-11-20 09:16:33.754511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.815 [2024-11-20 09:16:33.754826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.815 [2024-11-20 09:16:33.754856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.815 [2024-11-20 09:16:33.769054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.815 [2024-11-20 09:16:33.769344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.815 [2024-11-20 09:16:33.769372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.815 [2024-11-20 09:16:33.783209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.815 [2024-11-20 09:16:33.783521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.815 [2024-11-20 09:16:33.783552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.815 [2024-11-20 09:16:33.797688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.815 [2024-11-20 09:16:33.797944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.815 [2024-11-20 09:16:33.797972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.815 [2024-11-20 09:16:33.811571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.815 [2024-11-20 09:16:33.811894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.815 [2024-11-20 09:16:33.811922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.815 [2024-11-20 09:16:33.825708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.815 [2024-11-20 09:16:33.826079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.815 [2024-11-20 09:16:33.826110] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:56.815 [2024-11-20 09:16:33.840014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:56.816 [2024-11-20 09:16:33.840222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.816 [2024-11-20 09:16:33.840250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.073 [2024-11-20 09:16:33.853821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.073 [2024-11-20 09:16:33.854148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.073 [2024-11-20 09:16:33.854176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.073 [2024-11-20 09:16:33.868123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.073 [2024-11-20 09:16:33.868494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.073 [2024-11-20 09:16:33.868524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.073 [2024-11-20 09:16:33.882485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.073 [2024-11-20 09:16:33.882844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.073 
[2024-11-20 09:16:33.882873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.073 [2024-11-20 09:16:33.896620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.073 [2024-11-20 09:16:33.896909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.073 [2024-11-20 09:16:33.896955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.073 [2024-11-20 09:16:33.911005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.073 [2024-11-20 09:16:33.911296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.073 [2024-11-20 09:16:33.911347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.073 [2024-11-20 09:16:33.925186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.073 [2024-11-20 09:16:33.925491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.073 [2024-11-20 09:16:33.925521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.073 [2024-11-20 09:16:33.939411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.073 [2024-11-20 09:16:33.939725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6123 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:57.073 [2024-11-20 09:16:33.939755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.073 [2024-11-20 09:16:33.953632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.073 [2024-11-20 09:16:33.954003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.073 [2024-11-20 09:16:33.954033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.073 [2024-11-20 09:16:33.967770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.073 [2024-11-20 09:16:33.968055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.073 [2024-11-20 09:16:33.968098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.073 [2024-11-20 09:16:33.981874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.073 [2024-11-20 09:16:33.982163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.073 [2024-11-20 09:16:33.982208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.073 [2024-11-20 09:16:33.995810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.073 [2024-11-20 09:16:33.996107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:49 nsid:1 lba:13164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.073 [2024-11-20 09:16:33.996151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.073 [2024-11-20 09:16:34.009891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.073 [2024-11-20 09:16:34.010158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.073 [2024-11-20 09:16:34.010203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.073 [2024-11-20 09:16:34.023917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.073 [2024-11-20 09:16:34.024214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.073 [2024-11-20 09:16:34.024263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.073 [2024-11-20 09:16:34.038019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.073 [2024-11-20 09:16:34.038326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.073 [2024-11-20 09:16:34.038372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.073 [2024-11-20 09:16:34.052034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.073 [2024-11-20 09:16:34.052351] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.073 [2024-11-20 09:16:34.052381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.073 [2024-11-20 09:16:34.065422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.073 [2024-11-20 09:16:34.065673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.073 [2024-11-20 09:16:34.065700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.073 [2024-11-20 09:16:34.079541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.073 [2024-11-20 09:16:34.079837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.073 [2024-11-20 09:16:34.079880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.073 [2024-11-20 09:16:34.093446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.073 [2024-11-20 09:16:34.093693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.073 [2024-11-20 09:16:34.093721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.330 [2024-11-20 09:16:34.106940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.330 
[2024-11-20 09:16:34.107173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.330 [2024-11-20 09:16:34.107202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.330 [2024-11-20 09:16:34.120914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.330 [2024-11-20 09:16:34.121203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.330 [2024-11-20 09:16:34.121249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.330 [2024-11-20 09:16:34.134812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.330 [2024-11-20 09:16:34.135108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.330 [2024-11-20 09:16:34.135153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.330 [2024-11-20 09:16:34.148724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.330 [2024-11-20 09:16:34.149026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.330 [2024-11-20 09:16:34.149072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.330 [2024-11-20 09:16:34.162736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) 
with pdu=0x2000166f8618 00:25:57.330 [2024-11-20 09:16:34.163025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.330 [2024-11-20 09:16:34.163070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.330 [2024-11-20 09:16:34.176713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.330 [2024-11-20 09:16:34.176955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.330 [2024-11-20 09:16:34.176999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.330 [2024-11-20 09:16:34.190537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.330 [2024-11-20 09:16:34.190828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.330 [2024-11-20 09:16:34.190871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.330 [2024-11-20 09:16:34.204239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.330 [2024-11-20 09:16:34.204472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.330 [2024-11-20 09:16:34.204501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.330 [2024-11-20 09:16:34.218191] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.330 [2024-11-20 09:16:34.218545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.330 [2024-11-20 09:16:34.218575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.330 [2024-11-20 09:16:34.232378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.330 [2024-11-20 09:16:34.232643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.330 [2024-11-20 09:16:34.232688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.330 [2024-11-20 09:16:34.246592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.330 [2024-11-20 09:16:34.246927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.330 [2024-11-20 09:16:34.246971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.330 [2024-11-20 09:16:34.260897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.330 [2024-11-20 09:16:34.261190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.330 [2024-11-20 09:16:34.261234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.330 [2024-11-20 09:16:34.275078] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.330 [2024-11-20 09:16:34.275370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.330 [2024-11-20 09:16:34.275400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.330 [2024-11-20 09:16:34.289391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.330 [2024-11-20 09:16:34.289634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.330 [2024-11-20 09:16:34.289680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.330 [2024-11-20 09:16:34.303785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.330 [2024-11-20 09:16:34.304140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.330 [2024-11-20 09:16:34.304170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.330 [2024-11-20 09:16:34.317807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.330 [2024-11-20 09:16:34.318151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.330 [2024-11-20 09:16:34.318180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 
00:25:57.330 [2024-11-20 09:16:34.331999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.330 [2024-11-20 09:16:34.332283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.330 [2024-11-20 09:16:34.332338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.330 [2024-11-20 09:16:34.346263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.330 [2024-11-20 09:16:34.346596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.330 [2024-11-20 09:16:34.346625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.601 [2024-11-20 09:16:34.360124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.601 [2024-11-20 09:16:34.360379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.601 [2024-11-20 09:16:34.360408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.601 [2024-11-20 09:16:34.374361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.601 [2024-11-20 09:16:34.374654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.601 [2024-11-20 09:16:34.374683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.601 [2024-11-20 09:16:34.388502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.601 [2024-11-20 09:16:34.388813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.601 [2024-11-20 09:16:34.388842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.601 [2024-11-20 09:16:34.402702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.601 [2024-11-20 09:16:34.403001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.601 [2024-11-20 09:16:34.403030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.601 [2024-11-20 09:16:34.417113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.601 [2024-11-20 09:16:34.417440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.601 [2024-11-20 09:16:34.417469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.601 [2024-11-20 09:16:34.431453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.601 [2024-11-20 09:16:34.431786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.601 [2024-11-20 09:16:34.431815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.601 [2024-11-20 09:16:34.445706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.601 [2024-11-20 09:16:34.446002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.601 [2024-11-20 09:16:34.446048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.601 [2024-11-20 09:16:34.460012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.601 [2024-11-20 09:16:34.460285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.601 [2024-11-20 09:16:34.460324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.601 [2024-11-20 09:16:34.474268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.601 [2024-11-20 09:16:34.474609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.601 [2024-11-20 09:16:34.474656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:57.601 [2024-11-20 09:16:34.488563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618 00:25:57.601 [2024-11-20 09:16:34.488862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:57.601 [2024-11-20 09:16:34.488909] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:25:57.601 [2024-11-20 09:16:34.502845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618
00:25:57.601 [2024-11-20 09:16:34.503127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:57.601 [2024-11-20 09:16:34.503160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:25:57.601 18158.00 IOPS, 70.93 MiB/s [2024-11-20T08:16:34.630Z]
[2024-11-20 09:16:34.516907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x627d50) with pdu=0x2000166f8618
00:25:57.601 [2024-11-20 09:16:34.517192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:57.601 [2024-11-20 09:16:34.517221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:25:57.601
00:25:57.601 Latency(us)
00:25:57.601 [2024-11-20T08:16:34.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:57.601 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:25:57.601 nvme0n1 : 2.01 18159.29 70.93 0.00 0.00 7031.94 2730.67 15534.46
00:25:57.602 [2024-11-20T08:16:34.631Z] ===================================================================================================================
00:25:57.602 [2024-11-20T08:16:34.631Z] Total : 18159.29 70.93 0.00 0.00 7031.94 2730.67 15534.46
00:25:57.602 {
00:25:57.602 "results": [
00:25:57.602 {
00:25:57.602 "job": "nvme0n1",
00:25:57.602 "core_mask": "0x2",
00:25:57.602 "workload": "randwrite",
00:25:57.602 "status": "finished",
00:25:57.602 "queue_depth": 128,
00:25:57.602 "io_size": 4096,
00:25:57.602 "runtime": 2.008669,
00:25:57.602 "iops": 18159.288563720555,
00:25:57.602 "mibps": 70.93472095203342,
00:25:57.602 "io_failed": 0,
00:25:57.602 "io_timeout": 0,
00:25:57.602 "avg_latency_us": 7031.940792728246,
00:25:57.602 "min_latency_us": 2730.6666666666665,
00:25:57.602 "max_latency_us": 15534.45925925926
00:25:57.602 }
00:25:57.602 ],
00:25:57.602 "core_count": 1
00:25:57.602 }
00:25:57.602 09:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:57.602 09:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:57.602 09:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:57.602 09:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:57.602 | .driver_specific
00:25:57.602 | .nvme_error
00:25:57.602 | .status_code
00:25:57.602 | .command_transient_transport_error'
00:25:57.867 09:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 ))
00:25:57.867 09:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3434654
00:25:57.867 09:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3434654 ']'
00:25:57.867 09:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3434654
00:25:57.867 09:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:25:57.867 09:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:25:57.867 09:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps
--no-headers -o comm= 3434654
00:25:57.867 09:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:25:57.867 09:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:25:57.867 09:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3434654'
00:25:57.867 killing process with pid 3434654
00:25:57.867 09:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3434654
00:25:57.867 Received shutdown signal, test time was about 2.000000 seconds
00:25:57.867
00:25:57.867 Latency(us)
00:25:57.867 [2024-11-20T08:16:34.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:57.867 [2024-11-20T08:16:34.896Z] ===================================================================================================================
00:25:57.867 [2024-11-20T08:16:34.896Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:57.867 09:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3434654
00:25:58.124 09:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:25:58.125 09:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:25:58.125 09:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:25:58.125 09:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:25:58.125 09:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:25:58.125 09:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3435064
00:25:58.125 09:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:25:58.125 09:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3435064 /var/tmp/bperf.sock
00:25:58.125 09:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3435064 ']'
00:25:58.125 09:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:58.125 09:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:25:58.125 09:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:25:58.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:25:58.125 09:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:25:58.125 09:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:58.125 [2024-11-20 09:16:35.099493] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization...
00:25:58.125 [2024-11-20 09:16:35.099580] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3435064 ]
00:25:58.125 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:58.125 Zero copy mechanism will not be used.
00:25:58.382 [2024-11-20 09:16:35.165712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:58.382 [2024-11-20 09:16:35.219901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:58.382 09:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:25:58.382 09:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:25:58.382 09:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:58.382 09:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:58.640 09:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:25:58.640 09:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:58.640 09:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:58.640 09:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:58.640 09:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:58.640 09:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:59.205 nvme0n1
00:25:59.205 09:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:25:59.205 09:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:59.205 09:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:59.205 09:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:59.205 09:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:25:59.205 09:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:59.205 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:59.205 Zero copy mechanism will not be used.
00:25:59.205 Running I/O for 2 seconds...
00:25:59.205 [2024-11-20 09:16:36.086287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:25:59.205 [2024-11-20 09:16:36.086407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.205 [2024-11-20 09:16:36.086450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:25:59.205 [2024-11-20 09:16:36.092127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:25:59.205 [2024-11-20 09:16:36.092219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.205 [2024-11-20 09:16:36.092250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:25:59.205 [2024-11-20
09:16:36.097350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.205 [2024-11-20 09:16:36.097424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.205 [2024-11-20 09:16:36.097452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.205 [2024-11-20 09:16:36.102286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.205 [2024-11-20 09:16:36.102376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.205 [2024-11-20 09:16:36.102405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.205 [2024-11-20 09:16:36.107240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.205 [2024-11-20 09:16:36.107341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.205 [2024-11-20 09:16:36.107371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.205 [2024-11-20 09:16:36.112345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.205 [2024-11-20 09:16:36.112460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.205 [2024-11-20 09:16:36.112489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 
sqhd:002a p:0 m:0 dnr:0 00:25:59.205 [2024-11-20 09:16:36.118036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.205 [2024-11-20 09:16:36.118222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.205 [2024-11-20 09:16:36.118251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.205 [2024-11-20 09:16:36.124429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.205 [2024-11-20 09:16:36.124600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.205 [2024-11-20 09:16:36.124629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.205 [2024-11-20 09:16:36.130151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.205 [2024-11-20 09:16:36.130240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.205 [2024-11-20 09:16:36.130267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.205 [2024-11-20 09:16:36.136412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.205 [2024-11-20 09:16:36.136604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.205 [2024-11-20 09:16:36.136633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.205 [2024-11-20 09:16:36.142814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.205 [2024-11-20 09:16:36.142944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.205 [2024-11-20 09:16:36.142973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.205 [2024-11-20 09:16:36.149071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.205 [2024-11-20 09:16:36.149257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.205 [2024-11-20 09:16:36.149286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.205 [2024-11-20 09:16:36.155398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.205 [2024-11-20 09:16:36.155573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.205 [2024-11-20 09:16:36.155602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.205 [2024-11-20 09:16:36.162626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.205 [2024-11-20 09:16:36.162812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.205 [2024-11-20 09:16:36.162840] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.205 [2024-11-20 09:16:36.169465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.206 [2024-11-20 09:16:36.169650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.206 [2024-11-20 09:16:36.169680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.206 [2024-11-20 09:16:36.176504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.206 [2024-11-20 09:16:36.176615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.206 [2024-11-20 09:16:36.176651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.206 [2024-11-20 09:16:36.183767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.206 [2024-11-20 09:16:36.183913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.206 [2024-11-20 09:16:36.183943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.206 [2024-11-20 09:16:36.191114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.206 [2024-11-20 09:16:36.191250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.206 
[2024-11-20 09:16:36.191279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.206 [2024-11-20 09:16:36.198187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.206 [2024-11-20 09:16:36.198379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.206 [2024-11-20 09:16:36.198408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.206 [2024-11-20 09:16:36.205773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.206 [2024-11-20 09:16:36.205903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.206 [2024-11-20 09:16:36.205932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.206 [2024-11-20 09:16:36.213234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.206 [2024-11-20 09:16:36.213418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.206 [2024-11-20 09:16:36.213448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.206 [2024-11-20 09:16:36.219875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.206 [2024-11-20 09:16:36.219963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25088 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.206 [2024-11-20 09:16:36.219992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.206 [2024-11-20 09:16:36.226955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.206 [2024-11-20 09:16:36.227055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.206 [2024-11-20 09:16:36.227084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.465 [2024-11-20 09:16:36.233539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.465 [2024-11-20 09:16:36.233612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.465 [2024-11-20 09:16:36.233640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.465 [2024-11-20 09:16:36.239741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.465 [2024-11-20 09:16:36.239842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.465 [2024-11-20 09:16:36.239871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.465 [2024-11-20 09:16:36.245503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.465 [2024-11-20 09:16:36.245668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:8 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.465 [2024-11-20 09:16:36.245696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.465 [2024-11-20 09:16:36.251887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.465 [2024-11-20 09:16:36.251978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.465 [2024-11-20 09:16:36.252007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.465 [2024-11-20 09:16:36.257482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.465 [2024-11-20 09:16:36.257593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.465 [2024-11-20 09:16:36.257622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.465 [2024-11-20 09:16:36.263566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.465 [2024-11-20 09:16:36.263768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.465 [2024-11-20 09:16:36.263797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.465 [2024-11-20 09:16:36.269923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.465 [2024-11-20 09:16:36.270120] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.465 [2024-11-20 09:16:36.270149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.465 [2024-11-20 09:16:36.276256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.465 [2024-11-20 09:16:36.276451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.465 [2024-11-20 09:16:36.276480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.465 [2024-11-20 09:16:36.282814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.465 [2024-11-20 09:16:36.283005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.465 [2024-11-20 09:16:36.283034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.465 [2024-11-20 09:16:36.289919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.465 [2024-11-20 09:16:36.290100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.465 [2024-11-20 09:16:36.290129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.465 [2024-11-20 09:16:36.296777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.465 
[2024-11-20 09:16:36.296853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.465 [2024-11-20 09:16:36.296881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.465 [2024-11-20 09:16:36.303826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.465 [2024-11-20 09:16:36.303965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.465 [2024-11-20 09:16:36.303994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.465 [2024-11-20 09:16:36.311046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.465 [2024-11-20 09:16:36.311145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.465 [2024-11-20 09:16:36.311174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.465 [2024-11-20 09:16:36.318211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.465 [2024-11-20 09:16:36.318323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.465 [2024-11-20 09:16:36.318350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.465 [2024-11-20 09:16:36.324086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.465 [2024-11-20 09:16:36.324194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.465 [2024-11-20 09:16:36.324229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.465 [2024-11-20 09:16:36.329507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.465 [2024-11-20 09:16:36.329584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.465 [2024-11-20 09:16:36.329611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.465 [2024-11-20 09:16:36.335771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.465 [2024-11-20 09:16:36.335897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.465 [2024-11-20 09:16:36.335926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.465 [2024-11-20 09:16:36.340702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.465 [2024-11-20 09:16:36.340781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.465 [2024-11-20 09:16:36.340808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.465 [2024-11-20 09:16:36.345395] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.465 [2024-11-20 09:16:36.345480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.465 [2024-11-20 09:16:36.345515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.465 [2024-11-20 09:16:36.350077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.465 [2024-11-20 09:16:36.350146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.465 [2024-11-20 09:16:36.350173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.466 [2024-11-20 09:16:36.354793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.466 [2024-11-20 09:16:36.354878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.466 [2024-11-20 09:16:36.354905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.466 [2024-11-20 09:16:36.359452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.466 [2024-11-20 09:16:36.359526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.466 [2024-11-20 09:16:36.359553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 
00:25:59.466 [2024-11-20 09:16:36.364064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.466 [2024-11-20 09:16:36.364133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.466 [2024-11-20 09:16:36.364159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.466 [2024-11-20 09:16:36.368724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.466 [2024-11-20 09:16:36.368806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.466 [2024-11-20 09:16:36.368833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.466 [2024-11-20 09:16:36.373345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.466 [2024-11-20 09:16:36.373426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.466 [2024-11-20 09:16:36.373452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.466 [2024-11-20 09:16:36.378003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.466 [2024-11-20 09:16:36.378073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.466 [2024-11-20 09:16:36.378100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.466 [2024-11-20 09:16:36.382701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.466 [2024-11-20 09:16:36.382773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.466 [2024-11-20 09:16:36.382800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.466 [2024-11-20 09:16:36.387469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.466 [2024-11-20 09:16:36.387547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.466 [2024-11-20 09:16:36.387580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.466 [2024-11-20 09:16:36.392107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.466 [2024-11-20 09:16:36.392178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.466 [2024-11-20 09:16:36.392204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.466 [2024-11-20 09:16:36.396668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.466 [2024-11-20 09:16:36.396744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.466 [2024-11-20 09:16:36.396771] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.466 [2024-11-20 09:16:36.401241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.466 [2024-11-20 09:16:36.401332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.466 [2024-11-20 09:16:36.401358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.466 [2024-11-20 09:16:36.406360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.466 [2024-11-20 09:16:36.406505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.466 [2024-11-20 09:16:36.406534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.466 [2024-11-20 09:16:36.412337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.466 [2024-11-20 09:16:36.412493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.466 [2024-11-20 09:16:36.412521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.466 [2024-11-20 09:16:36.418719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.466 [2024-11-20 09:16:36.418919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.466 [2024-11-20 09:16:36.418948] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.466 [2024-11-20 09:16:36.425468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.466 [2024-11-20 09:16:36.425636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.466 [2024-11-20 09:16:36.425665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.466 [2024-11-20 09:16:36.432198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.466 [2024-11-20 09:16:36.432314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.466 [2024-11-20 09:16:36.432343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.466 [2024-11-20 09:16:36.437639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.466 [2024-11-20 09:16:36.437714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.466 [2024-11-20 09:16:36.437741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.466 [2024-11-20 09:16:36.442264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.466 [2024-11-20 09:16:36.442370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:59.466 [2024-11-20 09:16:36.442399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.466 [2024-11-20 09:16:36.446780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.466 [2024-11-20 09:16:36.446863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.466 [2024-11-20 09:16:36.446892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.466 [2024-11-20 09:16:36.451451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.466 [2024-11-20 09:16:36.451531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.466 [2024-11-20 09:16:36.451557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.466 [2024-11-20 09:16:36.456191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.466 [2024-11-20 09:16:36.456269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.466 [2024-11-20 09:16:36.456296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.466 [2024-11-20 09:16:36.460883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.466 [2024-11-20 09:16:36.460967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 
lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.466 [2024-11-20 09:16:36.460996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.467 [2024-11-20 09:16:36.465586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.467 [2024-11-20 09:16:36.465660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.467 [2024-11-20 09:16:36.465688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.467 [2024-11-20 09:16:36.470259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.467 [2024-11-20 09:16:36.470358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.467 [2024-11-20 09:16:36.470387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.467 [2024-11-20 09:16:36.474916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.467 [2024-11-20 09:16:36.474994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.467 [2024-11-20 09:16:36.475021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.467 [2024-11-20 09:16:36.479529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.467 [2024-11-20 09:16:36.479602] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.467 [2024-11-20 09:16:36.479629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.467 [2024-11-20 09:16:36.484389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.467 [2024-11-20 09:16:36.484472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.467 [2024-11-20 09:16:36.484499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.467 [2024-11-20 09:16:36.488988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.467 [2024-11-20 09:16:36.489082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.467 [2024-11-20 09:16:36.489110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.726 [2024-11-20 09:16:36.493632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.726 [2024-11-20 09:16:36.493720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.726 [2024-11-20 09:16:36.493747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.726 [2024-11-20 09:16:36.498403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.726 
[2024-11-20 09:16:36.498483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.726 [2024-11-20 09:16:36.498510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.726 [2024-11-20 09:16:36.503331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.726 [2024-11-20 09:16:36.503470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.726 [2024-11-20 09:16:36.503498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.726 [2024-11-20 09:16:36.509089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.726 [2024-11-20 09:16:36.509266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.726 [2024-11-20 09:16:36.509295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.726 [2024-11-20 09:16:36.515348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.726 [2024-11-20 09:16:36.515516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.726 [2024-11-20 09:16:36.515545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.726 [2024-11-20 09:16:36.522270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.726 [2024-11-20 09:16:36.522404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.726 [2024-11-20 09:16:36.522440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.726 [2024-11-20 09:16:36.529201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.726 [2024-11-20 09:16:36.529327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.726 [2024-11-20 09:16:36.529357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.726 [2024-11-20 09:16:36.535874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.726 [2024-11-20 09:16:36.535969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.726 [2024-11-20 09:16:36.535997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.726 [2024-11-20 09:16:36.542516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.726 [2024-11-20 09:16:36.542675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.726 [2024-11-20 09:16:36.542704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.726 [2024-11-20 09:16:36.549012] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.726 [2024-11-20 09:16:36.549141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.726 [2024-11-20 09:16:36.549170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.726 [2024-11-20 09:16:36.555885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.726 [2024-11-20 09:16:36.556063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.726 [2024-11-20 09:16:36.556092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.726 [2024-11-20 09:16:36.562328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.726 [2024-11-20 09:16:36.562443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.726 [2024-11-20 09:16:36.562472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.726 [2024-11-20 09:16:36.569112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.726 [2024-11-20 09:16:36.569315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.726 [2024-11-20 09:16:36.569345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 
dnr:0 00:25:59.726 [2024-11-20 09:16:36.576148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.726 [2024-11-20 09:16:36.576252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.726 [2024-11-20 09:16:36.576280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.726 [2024-11-20 09:16:36.582682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.726 [2024-11-20 09:16:36.582917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.726 [2024-11-20 09:16:36.582945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.727 [2024-11-20 09:16:36.589115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.727 [2024-11-20 09:16:36.589221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-11-20 09:16:36.589250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.727 [2024-11-20 09:16:36.595782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.727 [2024-11-20 09:16:36.595949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-11-20 09:16:36.595978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.727 [2024-11-20 09:16:36.602135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.727 [2024-11-20 09:16:36.602245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-11-20 09:16:36.602274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.727 [2024-11-20 09:16:36.608943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.727 [2024-11-20 09:16:36.609063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-11-20 09:16:36.609092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.727 [2024-11-20 09:16:36.615640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.727 [2024-11-20 09:16:36.615736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-11-20 09:16:36.615764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.727 [2024-11-20 09:16:36.622410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.727 [2024-11-20 09:16:36.622560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-11-20 09:16:36.622588] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.727 [2024-11-20 09:16:36.629045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.727 [2024-11-20 09:16:36.629219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-11-20 09:16:36.629249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.727 [2024-11-20 09:16:36.635866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.727 [2024-11-20 09:16:36.636037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-11-20 09:16:36.636066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.727 [2024-11-20 09:16:36.642645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.727 [2024-11-20 09:16:36.642840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-11-20 09:16:36.642869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.727 [2024-11-20 09:16:36.649168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.727 [2024-11-20 09:16:36.649287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-11-20 09:16:36.649324] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.727 [2024-11-20 09:16:36.656132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.727 [2024-11-20 09:16:36.656309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-11-20 09:16:36.656339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.727 [2024-11-20 09:16:36.662928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.727 [2024-11-20 09:16:36.663112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-11-20 09:16:36.663141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.727 [2024-11-20 09:16:36.669633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.727 [2024-11-20 09:16:36.669769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-11-20 09:16:36.669798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.727 [2024-11-20 09:16:36.676380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.727 [2024-11-20 09:16:36.676546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:59.727 [2024-11-20 09:16:36.676574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.727 [2024-11-20 09:16:36.683330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.727 [2024-11-20 09:16:36.683567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-11-20 09:16:36.683595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.727 [2024-11-20 09:16:36.690286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.727 [2024-11-20 09:16:36.690444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-11-20 09:16:36.690473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.727 [2024-11-20 09:16:36.697049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.727 [2024-11-20 09:16:36.697217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-11-20 09:16:36.697253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.727 [2024-11-20 09:16:36.703854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.727 [2024-11-20 09:16:36.704011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21984 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-11-20 09:16:36.704040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.727 [2024-11-20 09:16:36.710656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.727 [2024-11-20 09:16:36.710823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-11-20 09:16:36.710852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.727 [2024-11-20 09:16:36.717615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.727 [2024-11-20 09:16:36.717766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-11-20 09:16:36.717794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.727 [2024-11-20 09:16:36.723930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.727 [2024-11-20 09:16:36.724080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-11-20 09:16:36.724108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.727 [2024-11-20 09:16:36.730462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.727 [2024-11-20 09:16:36.730623] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-11-20 09:16:36.730651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.727 [2024-11-20 09:16:36.737240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.727 [2024-11-20 09:16:36.737429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.727 [2024-11-20 09:16:36.737458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.728 [2024-11-20 09:16:36.743781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.728 [2024-11-20 09:16:36.743988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.728 [2024-11-20 09:16:36.744017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.728 [2024-11-20 09:16:36.750591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.728 [2024-11-20 09:16:36.750716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.728 [2024-11-20 09:16:36.750746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.987 [2024-11-20 09:16:36.757372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.987 [2024-11-20 09:16:36.757582] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.987 [2024-11-20 09:16:36.757611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.987 [2024-11-20 09:16:36.763905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.987 [2024-11-20 09:16:36.764030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.987 [2024-11-20 09:16:36.764059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.987 [2024-11-20 09:16:36.769052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.987 [2024-11-20 09:16:36.769158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.987 [2024-11-20 09:16:36.769187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.987 [2024-11-20 09:16:36.773904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.987 [2024-11-20 09:16:36.774058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.987 [2024-11-20 09:16:36.774086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.987 [2024-11-20 09:16:36.778890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with 
pdu=0x2000166ff3c8 00:25:59.987 [2024-11-20 09:16:36.779004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.987 [2024-11-20 09:16:36.779032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.987 [2024-11-20 09:16:36.783672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.987 [2024-11-20 09:16:36.783775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.988 [2024-11-20 09:16:36.783804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.988 [2024-11-20 09:16:36.788848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.988 [2024-11-20 09:16:36.788990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.988 [2024-11-20 09:16:36.789018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.988 [2024-11-20 09:16:36.793781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.988 [2024-11-20 09:16:36.793925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.988 [2024-11-20 09:16:36.793953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.988 [2024-11-20 09:16:36.798938] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.988 [2024-11-20 09:16:36.799049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.988 [2024-11-20 09:16:36.799078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.988 [2024-11-20 09:16:36.804271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.988 [2024-11-20 09:16:36.804369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.988 [2024-11-20 09:16:36.804398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.988 [2024-11-20 09:16:36.809009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.988 [2024-11-20 09:16:36.809082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.988 [2024-11-20 09:16:36.809109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.988 [2024-11-20 09:16:36.813634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.988 [2024-11-20 09:16:36.813706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.988 [2024-11-20 09:16:36.813732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.988 [2024-11-20 09:16:36.818337] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.988 [2024-11-20 09:16:36.818423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.988 [2024-11-20 09:16:36.818450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.988 [2024-11-20 09:16:36.823015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.988 [2024-11-20 09:16:36.823096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.988 [2024-11-20 09:16:36.823122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.988 [2024-11-20 09:16:36.827672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.988 [2024-11-20 09:16:36.827758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.988 [2024-11-20 09:16:36.827786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.988 [2024-11-20 09:16:36.832613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.988 [2024-11-20 09:16:36.832694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.988 [2024-11-20 09:16:36.832720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 
00:25:59.988 [2024-11-20 09:16:36.837395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.988 [2024-11-20 09:16:36.837488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.988 [2024-11-20 09:16:36.837515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.988 [2024-11-20 09:16:36.842031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.988 [2024-11-20 09:16:36.842112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.988 [2024-11-20 09:16:36.842146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.988 [2024-11-20 09:16:36.846707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.988 [2024-11-20 09:16:36.846779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.988 [2024-11-20 09:16:36.846805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.988 [2024-11-20 09:16:36.851248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.988 [2024-11-20 09:16:36.851338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.988 [2024-11-20 09:16:36.851365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.988 [2024-11-20 09:16:36.855806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.988 [2024-11-20 09:16:36.855877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.988 [2024-11-20 09:16:36.855904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.988 [2024-11-20 09:16:36.860407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.988 [2024-11-20 09:16:36.860495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.988 [2024-11-20 09:16:36.860522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.988 [2024-11-20 09:16:36.865162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.988 [2024-11-20 09:16:36.865295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.988 [2024-11-20 09:16:36.865331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.988 [2024-11-20 09:16:36.870541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.988 [2024-11-20 09:16:36.870733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.988 [2024-11-20 09:16:36.870761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.988 [2024-11-20 09:16:36.876452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.988 [2024-11-20 09:16:36.876639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.988 [2024-11-20 09:16:36.876667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.988 [2024-11-20 09:16:36.883109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.988 [2024-11-20 09:16:36.883225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.988 [2024-11-20 09:16:36.883253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.988 [2024-11-20 09:16:36.888940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.988 [2024-11-20 09:16:36.889088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.988 [2024-11-20 09:16:36.889116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.988 [2024-11-20 09:16:36.893437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.988 [2024-11-20 09:16:36.893518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.988 [2024-11-20 09:16:36.893545] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.988 [2024-11-20 09:16:36.898071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.988 [2024-11-20 09:16:36.898146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.988 [2024-11-20 09:16:36.898172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.988 [2024-11-20 09:16:36.902716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.988 [2024-11-20 09:16:36.902854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.988 [2024-11-20 09:16:36.902882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.988 [2024-11-20 09:16:36.907708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.988 [2024-11-20 09:16:36.907784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.988 [2024-11-20 09:16:36.907811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.988 [2024-11-20 09:16:36.912399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.988 [2024-11-20 09:16:36.912471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:59.988 [2024-11-20 09:16:36.912497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.988 [2024-11-20 09:16:36.917237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.989 [2024-11-20 09:16:36.917363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.989 [2024-11-20 09:16:36.917391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.989 [2024-11-20 09:16:36.922779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.989 [2024-11-20 09:16:36.922963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.989 [2024-11-20 09:16:36.922991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.989 [2024-11-20 09:16:36.928568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.989 [2024-11-20 09:16:36.928713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.989 [2024-11-20 09:16:36.928741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.989 [2024-11-20 09:16:36.934628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.989 [2024-11-20 09:16:36.934795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:992 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.989 [2024-11-20 09:16:36.934824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.989 [2024-11-20 09:16:36.941176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.989 [2024-11-20 09:16:36.941390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.989 [2024-11-20 09:16:36.941420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.989 [2024-11-20 09:16:36.947814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.989 [2024-11-20 09:16:36.947894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.989 [2024-11-20 09:16:36.947922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.989 [2024-11-20 09:16:36.954836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.989 [2024-11-20 09:16:36.954984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.989 [2024-11-20 09:16:36.955014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.989 [2024-11-20 09:16:36.960808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.989 [2024-11-20 09:16:36.960993] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.989 [2024-11-20 09:16:36.961022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.989 [2024-11-20 09:16:36.966857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.989 [2024-11-20 09:16:36.967034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.989 [2024-11-20 09:16:36.967063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.989 [2024-11-20 09:16:36.971784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.989 [2024-11-20 09:16:36.971874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.989 [2024-11-20 09:16:36.971902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.989 [2024-11-20 09:16:36.977670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.989 [2024-11-20 09:16:36.977844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.989 [2024-11-20 09:16:36.977872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.989 [2024-11-20 09:16:36.982982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.989 [2024-11-20 09:16:36.983082] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.989 [2024-11-20 09:16:36.983121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.989 [2024-11-20 09:16:36.987762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.989 [2024-11-20 09:16:36.987872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.989 [2024-11-20 09:16:36.987900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.989 [2024-11-20 09:16:36.992498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.989 [2024-11-20 09:16:36.992601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.989 [2024-11-20 09:16:36.992628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.989 [2024-11-20 09:16:36.997233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.989 [2024-11-20 09:16:36.997354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.989 [2024-11-20 09:16:36.997382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.989 [2024-11-20 09:16:37.001871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 
00:25:59.989 [2024-11-20 09:16:37.001978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.989 [2024-11-20 09:16:37.002005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:59.989 [2024-11-20 09:16:37.006545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.989 [2024-11-20 09:16:37.006691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.989 [2024-11-20 09:16:37.006720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.989 [2024-11-20 09:16:37.011226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:25:59.989 [2024-11-20 09:16:37.011323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.989 [2024-11-20 09:16:37.011350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:00.249 [2024-11-20 09:16:37.016008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.249 [2024-11-20 09:16:37.016144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.249 [2024-11-20 09:16:37.016173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:00.249 [2024-11-20 09:16:37.020877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.249 [2024-11-20 09:16:37.020963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.249 [2024-11-20 09:16:37.020989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:00.249 [2024-11-20 09:16:37.026600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.249 [2024-11-20 09:16:37.026679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.249 [2024-11-20 09:16:37.026706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:00.249 [2024-11-20 09:16:37.032338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.249 [2024-11-20 09:16:37.032431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.249 [2024-11-20 09:16:37.032458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:00.249 [2024-11-20 09:16:37.037317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.249 [2024-11-20 09:16:37.037412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.249 [2024-11-20 09:16:37.037439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:00.249 [2024-11-20 09:16:37.043119] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.249 [2024-11-20 09:16:37.043317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.249 [2024-11-20 09:16:37.043346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:00.249 [2024-11-20 09:16:37.048871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.249 [2024-11-20 09:16:37.048970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.249 [2024-11-20 09:16:37.048998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:00.249 [2024-11-20 09:16:37.053694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.249 [2024-11-20 09:16:37.053807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.249 [2024-11-20 09:16:37.053835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:00.249 [2024-11-20 09:16:37.058470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.249 [2024-11-20 09:16:37.058558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.249 [2024-11-20 09:16:37.058585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 
dnr:0 00:26:00.249 [2024-11-20 09:16:37.063239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.249 [2024-11-20 09:16:37.063354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.249 [2024-11-20 09:16:37.063383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:00.249 [2024-11-20 09:16:37.068792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.249 [2024-11-20 09:16:37.068991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.249 [2024-11-20 09:16:37.069020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:00.249 [2024-11-20 09:16:37.074447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.249 [2024-11-20 09:16:37.074530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.249 [2024-11-20 09:16:37.074557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:00.249 [2024-11-20 09:16:37.079759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.249 [2024-11-20 09:16:37.079932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.249 [2024-11-20 09:16:37.079960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:00.249 5392.00 IOPS, 674.00 MiB/s [2024-11-20T08:16:37.278Z] [2024-11-20 09:16:37.086985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.249 [2024-11-20 09:16:37.087075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.249 [2024-11-20 09:16:37.087103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:00.249 [2024-11-20 09:16:37.092591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.249 [2024-11-20 09:16:37.092749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.249 [2024-11-20 09:16:37.092778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:00.249 [2024-11-20 09:16:37.098917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.249 [2024-11-20 09:16:37.099094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.249 [2024-11-20 09:16:37.099122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:00.249 [2024-11-20 09:16:37.105289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.249 [2024-11-20 09:16:37.105474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.249 [2024-11-20 
09:16:37.105502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:00.249 [2024-11-20 09:16:37.111688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.249 [2024-11-20 09:16:37.111849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.249 [2024-11-20 09:16:37.111878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:00.250 [2024-11-20 09:16:37.118117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.250 [2024-11-20 09:16:37.118228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.250 [2024-11-20 09:16:37.118259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:00.250 [2024-11-20 09:16:37.124576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.250 [2024-11-20 09:16:37.124727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.250 [2024-11-20 09:16:37.124760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:00.250 [2024-11-20 09:16:37.131129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.250 [2024-11-20 09:16:37.131267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:00.250 [2024-11-20 09:16:37.131296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:00.250 [2024-11-20 09:16:37.137572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.250 [2024-11-20 09:16:37.137735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.250 [2024-11-20 09:16:37.137764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:00.250 [2024-11-20 09:16:37.143944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.250 [2024-11-20 09:16:37.144139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.250 [2024-11-20 09:16:37.144167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:00.250 [2024-11-20 09:16:37.150245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.250 [2024-11-20 09:16:37.150436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.250 [2024-11-20 09:16:37.150465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:00.250 [2024-11-20 09:16:37.156625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.250 [2024-11-20 09:16:37.156827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 
nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.250 [2024-11-20 09:16:37.156855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:00.250 [2024-11-20 09:16:37.163073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.250 [2024-11-20 09:16:37.163171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.250 [2024-11-20 09:16:37.163200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:00.250 [2024-11-20 09:16:37.169218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.250 [2024-11-20 09:16:37.169393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.250 [2024-11-20 09:16:37.169422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:00.250 [2024-11-20 09:16:37.175503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.250 [2024-11-20 09:16:37.175691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.250 [2024-11-20 09:16:37.175719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:00.250 [2024-11-20 09:16:37.181775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.250 [2024-11-20 09:16:37.181945] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.250 [2024-11-20 09:16:37.181974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:00.250 [2024-11-20 09:16:37.188067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.250 [2024-11-20 09:16:37.188266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.250 [2024-11-20 09:16:37.188295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:00.250 [2024-11-20 09:16:37.194575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.250 [2024-11-20 09:16:37.194752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.250 [2024-11-20 09:16:37.194781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:00.250 [2024-11-20 09:16:37.200503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.250 [2024-11-20 09:16:37.200603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.250 [2024-11-20 09:16:37.200634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:00.250 [2024-11-20 09:16:37.205741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.250 
[2024-11-20 09:16:37.205845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.250 [2024-11-20 09:16:37.205874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:00.250 [2024-11-20 09:16:37.211071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.250 [2024-11-20 09:16:37.211179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.250 [2024-11-20 09:16:37.211207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:00.250 [2024-11-20 09:16:37.216132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.250 [2024-11-20 09:16:37.216236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.250 [2024-11-20 09:16:37.216263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:00.250 [2024-11-20 09:16:37.221122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.250 [2024-11-20 09:16:37.221219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.250 [2024-11-20 09:16:37.221246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:00.250 [2024-11-20 09:16:37.226360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.250 [2024-11-20 09:16:37.226555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.250 [2024-11-20 09:16:37.226584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:00.250 [2024-11-20 09:16:37.232576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.250 [2024-11-20 09:16:37.232681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.250 [2024-11-20 09:16:37.232709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:00.250 [2024-11-20 09:16:37.237509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.250 [2024-11-20 09:16:37.237644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.250 [2024-11-20 09:16:37.237673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:00.250 [2024-11-20 09:16:37.242774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.250 [2024-11-20 09:16:37.242864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.251 [2024-11-20 09:16:37.242892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:00.251 [2024-11-20 09:16:37.248396] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.251 [2024-11-20 09:16:37.248469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.251 [2024-11-20 09:16:37.248496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:00.251 [2024-11-20 09:16:37.254336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.251 [2024-11-20 09:16:37.254408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.251 [2024-11-20 09:16:37.254435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:00.251 [2024-11-20 09:16:37.260405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.251 [2024-11-20 09:16:37.260474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.251 [2024-11-20 09:16:37.260501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:00.251 [2024-11-20 09:16:37.265837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.251 [2024-11-20 09:16:37.265925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.251 [2024-11-20 09:16:37.265952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 
dnr:0 00:26:00.251 [2024-11-20 09:16:37.271624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.251 [2024-11-20 09:16:37.271729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.251 [2024-11-20 09:16:37.271758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:00.546 [2024-11-20 09:16:37.278109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.546 [2024-11-20 09:16:37.278200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.546 [2024-11-20 09:16:37.278234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:00.546 [2024-11-20 09:16:37.284912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.546 [2024-11-20 09:16:37.284989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.546 [2024-11-20 09:16:37.285016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:00.546 [2024-11-20 09:16:37.290413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.546 [2024-11-20 09:16:37.290550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.546 [2024-11-20 09:16:37.290578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:00.546 [2024-11-20 09:16:37.295382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.546 [2024-11-20 09:16:37.295488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.546 [2024-11-20 09:16:37.295516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:00.546 [2024-11-20 09:16:37.300275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.546 [2024-11-20 09:16:37.300429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.546 [2024-11-20 09:16:37.300458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:00.546 [2024-11-20 09:16:37.305485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.546 [2024-11-20 09:16:37.305591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.546 [2024-11-20 09:16:37.305619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:00.546 [2024-11-20 09:16:37.311643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.546 [2024-11-20 09:16:37.311848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.546 [2024-11-20 09:16:37.311877] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:00.546 [2024-11-20 09:16:37.316900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.546 [2024-11-20 09:16:37.317030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.546 [2024-11-20 09:16:37.317059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:00.546 [2024-11-20 09:16:37.321977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.546 [2024-11-20 09:16:37.322070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.546 [2024-11-20 09:16:37.322097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:00.546 [2024-11-20 09:16:37.327126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.546 [2024-11-20 09:16:37.327260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.546 [2024-11-20 09:16:37.327294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:00.546 [2024-11-20 09:16:37.332045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.546 [2024-11-20 09:16:37.332216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.546 [2024-11-20 09:16:37.332245] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:00.546 [2024-11-20 09:16:37.338463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.546 [2024-11-20 09:16:37.338619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.546 [2024-11-20 09:16:37.338648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:00.546 [2024-11-20 09:16:37.345147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.546 [2024-11-20 09:16:37.345260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.546 [2024-11-20 09:16:37.345288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:00.546 [2024-11-20 09:16:37.351926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.546 [2024-11-20 09:16:37.352093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.546 [2024-11-20 09:16:37.352122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:00.546 [2024-11-20 09:16:37.359103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.546 [2024-11-20 09:16:37.359320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:00.546 [2024-11-20 09:16:37.359350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:00.546 [2024-11-20 09:16:37.366280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.546 [2024-11-20 09:16:37.366477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.546 [2024-11-20 09:16:37.366506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:00.546 [2024-11-20 09:16:37.373541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.546 [2024-11-20 09:16:37.373611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.546 [2024-11-20 09:16:37.373639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:00.546 [2024-11-20 09:16:37.380007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.546 [2024-11-20 09:16:37.380081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.546 [2024-11-20 09:16:37.380108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:00.546 [2024-11-20 09:16:37.385292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.546 [2024-11-20 09:16:37.385393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13248 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.546 [2024-11-20 09:16:37.385420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:00.546 [2024-11-20 09:16:37.390207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.546 [2024-11-20 09:16:37.390289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.546 [2024-11-20 09:16:37.390325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:00.546 [2024-11-20 09:16:37.395200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.546 [2024-11-20 09:16:37.395268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.546 [2024-11-20 09:16:37.395295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:00.546 [2024-11-20 09:16:37.400312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.546 [2024-11-20 09:16:37.400394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.546 [2024-11-20 09:16:37.400421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:00.546 [2024-11-20 09:16:37.405439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.546 [2024-11-20 09:16:37.405532] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.546 [2024-11-20 09:16:37.405559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:00.546 [2024-11-20 09:16:37.410613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.546 [2024-11-20 09:16:37.410699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.546 [2024-11-20 09:16:37.410726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:00.546 [2024-11-20 09:16:37.415962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.546 [2024-11-20 09:16:37.416043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.546 [2024-11-20 09:16:37.416070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:00.546 [2024-11-20 09:16:37.421085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.547 [2024-11-20 09:16:37.421169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.547 [2024-11-20 09:16:37.421195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:00.547 [2024-11-20 09:16:37.426133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.547 [2024-11-20 09:16:37.426215] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.547 [2024-11-20 09:16:37.426241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:00.547 [2024-11-20 09:16:37.431228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.547 [2024-11-20 09:16:37.431317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.547 [2024-11-20 09:16:37.431344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:00.547 [2024-11-20 09:16:37.436821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.547 [2024-11-20 09:16:37.436949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.547 [2024-11-20 09:16:37.436978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:00.547 [2024-11-20 09:16:37.443144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.547 [2024-11-20 09:16:37.443340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.547 [2024-11-20 09:16:37.443369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:00.547 [2024-11-20 09:16:37.449601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with 
pdu=0x2000166ff3c8 00:26:00.547 [2024-11-20 09:16:37.449745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.547 [2024-11-20 09:16:37.449773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:00.547 [2024-11-20 09:16:37.456005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.547 [2024-11-20 09:16:37.456196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.547 [2024-11-20 09:16:37.456224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:00.547 [2024-11-20 09:16:37.462378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.547 [2024-11-20 09:16:37.462572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.547 [2024-11-20 09:16:37.462600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:00.547 [2024-11-20 09:16:37.468874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.547 [2024-11-20 09:16:37.469045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.547 [2024-11-20 09:16:37.469073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:00.547 [2024-11-20 09:16:37.475335] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.547 [2024-11-20 09:16:37.475413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.547 [2024-11-20 09:16:37.475440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:00.547 [2024-11-20 09:16:37.482384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.547 [2024-11-20 09:16:37.482558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.547 [2024-11-20 09:16:37.482592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:00.547 [2024-11-20 09:16:37.488669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.547 [2024-11-20 09:16:37.488836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.547 [2024-11-20 09:16:37.488865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:00.547 [2024-11-20 09:16:37.495243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.547 [2024-11-20 09:16:37.495417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.547 [2024-11-20 09:16:37.495446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:00.547 [2024-11-20 09:16:37.501864] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.547 [2024-11-20 09:16:37.502019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.547 [2024-11-20 09:16:37.502047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:00.547 [2024-11-20 09:16:37.507726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.547 [2024-11-20 09:16:37.507843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.547 [2024-11-20 09:16:37.507872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:00.547 [2024-11-20 09:16:37.513904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.547 [2024-11-20 09:16:37.514053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.547 [2024-11-20 09:16:37.514081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:00.547 [2024-11-20 09:16:37.520034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.547 [2024-11-20 09:16:37.520192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.547 [2024-11-20 09:16:37.520220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 
00:26:00.547 [2024-11-20 09:16:37.526229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.547 [2024-11-20 09:16:37.526362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.547 [2024-11-20 09:16:37.526391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:00.547 [2024-11-20 09:16:37.532448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.547 [2024-11-20 09:16:37.532598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.547 [2024-11-20 09:16:37.532626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:00.547 [2024-11-20 09:16:37.538369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.547 [2024-11-20 09:16:37.538538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.547 [2024-11-20 09:16:37.538567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:00.547 [2024-11-20 09:16:37.544575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.547 [2024-11-20 09:16:37.544708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.547 [2024-11-20 09:16:37.544737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:00.830 [2024-11-20 09:16:37.551484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.830 [2024-11-20 09:16:37.551583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.830 [2024-11-20 09:16:37.551610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:00.830 [2024-11-20 09:16:37.557550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.831 [2024-11-20 09:16:37.557633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.831 [2024-11-20 09:16:37.557661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:00.831 [2024-11-20 09:16:37.562413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.831 [2024-11-20 09:16:37.562499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.831 [2024-11-20 09:16:37.562525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:00.831 [2024-11-20 09:16:37.567161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.831 [2024-11-20 09:16:37.567242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.831 [2024-11-20 09:16:37.567269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:00.831 [2024-11-20 09:16:37.571964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.831 [2024-11-20 09:16:37.572045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.831 [2024-11-20 09:16:37.572071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:00.831 [2024-11-20 09:16:37.576740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.831 [2024-11-20 09:16:37.576816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.831 [2024-11-20 09:16:37.576842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:00.831 [2024-11-20 09:16:37.581311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.831 [2024-11-20 09:16:37.581401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.831 [2024-11-20 09:16:37.581427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:00.831 [2024-11-20 09:16:37.585946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.831 [2024-11-20 09:16:37.586014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.831 [2024-11-20 09:16:37.586041] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:00.831 [2024-11-20 09:16:37.590858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.831 [2024-11-20 09:16:37.590947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.831 [2024-11-20 09:16:37.590974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:00.831 [2024-11-20 09:16:37.595716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.831 [2024-11-20 09:16:37.595805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.831 [2024-11-20 09:16:37.595832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:00.831 [2024-11-20 09:16:37.600514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.831 [2024-11-20 09:16:37.600605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.831 [2024-11-20 09:16:37.600632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:00.831 [2024-11-20 09:16:37.605205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:00.831 [2024-11-20 09:16:37.605299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:00.831 [2024-11-20 09:16:37.605335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:00.831 [2024-11-20 09:16:37.609788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.831 [2024-11-20 09:16:37.609878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.831 [2024-11-20 09:16:37.609905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:00.831 [2024-11-20 09:16:37.615034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.831 [2024-11-20 09:16:37.615107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.831 [2024-11-20 09:16:37.615134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:00.831 [2024-11-20 09:16:37.620322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.831 [2024-11-20 09:16:37.620395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.831 [2024-11-20 09:16:37.620422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:00.831 [2024-11-20 09:16:37.625745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.831 [2024-11-20 09:16:37.625870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.831 [2024-11-20 09:16:37.625905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:00.831 [2024-11-20 09:16:37.631596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.831 [2024-11-20 09:16:37.631672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.831 [2024-11-20 09:16:37.631699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:00.831 [2024-11-20 09:16:37.636411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.831 [2024-11-20 09:16:37.636499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.831 [2024-11-20 09:16:37.636526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:00.831 [2024-11-20 09:16:37.641091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.831 [2024-11-20 09:16:37.641165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.831 [2024-11-20 09:16:37.641191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:00.831 [2024-11-20 09:16:37.645910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.831 [2024-11-20 09:16:37.646049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.831 [2024-11-20 09:16:37.646078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:00.831 [2024-11-20 09:16:37.651017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.831 [2024-11-20 09:16:37.651113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.831 [2024-11-20 09:16:37.651140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:00.831 [2024-11-20 09:16:37.655808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.831 [2024-11-20 09:16:37.655878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.831 [2024-11-20 09:16:37.655905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:00.832 [2024-11-20 09:16:37.660602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.832 [2024-11-20 09:16:37.660675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.832 [2024-11-20 09:16:37.660701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:00.832 [2024-11-20 09:16:37.665366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.832 [2024-11-20 09:16:37.665438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.832 [2024-11-20 09:16:37.665465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:00.832 [2024-11-20 09:16:37.670109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.832 [2024-11-20 09:16:37.670199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.832 [2024-11-20 09:16:37.670226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:00.832 [2024-11-20 09:16:37.674839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.832 [2024-11-20 09:16:37.674921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.832 [2024-11-20 09:16:37.674949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:00.832 [2024-11-20 09:16:37.679626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.832 [2024-11-20 09:16:37.679699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.832 [2024-11-20 09:16:37.679726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:00.832 [2024-11-20 09:16:37.684468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.832 [2024-11-20 09:16:37.684549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.832 [2024-11-20 09:16:37.684576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:00.832 [2024-11-20 09:16:37.689273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.832 [2024-11-20 09:16:37.689385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.832 [2024-11-20 09:16:37.689413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:00.832 [2024-11-20 09:16:37.694294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.832 [2024-11-20 09:16:37.694385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.832 [2024-11-20 09:16:37.694411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:00.832 [2024-11-20 09:16:37.699087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.832 [2024-11-20 09:16:37.699178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.832 [2024-11-20 09:16:37.699205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:00.832 [2024-11-20 09:16:37.703866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.832 [2024-11-20 09:16:37.703945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.832 [2024-11-20 09:16:37.703972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:00.832 [2024-11-20 09:16:37.708608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.832 [2024-11-20 09:16:37.708689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.832 [2024-11-20 09:16:37.708716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:00.832 [2024-11-20 09:16:37.713388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.832 [2024-11-20 09:16:37.713480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.832 [2024-11-20 09:16:37.713506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:00.832 [2024-11-20 09:16:37.718241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.832 [2024-11-20 09:16:37.718327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.832 [2024-11-20 09:16:37.718353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:00.832 [2024-11-20 09:16:37.723195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.832 [2024-11-20 09:16:37.723318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.832 [2024-11-20 09:16:37.723345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:00.832 [2024-11-20 09:16:37.728017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.832 [2024-11-20 09:16:37.728105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.832 [2024-11-20 09:16:37.728132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:00.832 [2024-11-20 09:16:37.732899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.832 [2024-11-20 09:16:37.732973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.832 [2024-11-20 09:16:37.733000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:00.832 [2024-11-20 09:16:37.737604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.832 [2024-11-20 09:16:37.737686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.832 [2024-11-20 09:16:37.737713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:00.832 [2024-11-20 09:16:37.742564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.832 [2024-11-20 09:16:37.742641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.832 [2024-11-20 09:16:37.742667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:00.832 [2024-11-20 09:16:37.747358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.832 [2024-11-20 09:16:37.747449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.832 [2024-11-20 09:16:37.747476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:00.832 [2024-11-20 09:16:37.752249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.832 [2024-11-20 09:16:37.752336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.832 [2024-11-20 09:16:37.752370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:00.832 [2024-11-20 09:16:37.757057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.832 [2024-11-20 09:16:37.757129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.832 [2024-11-20 09:16:37.757155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:00.832 [2024-11-20 09:16:37.761705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.832 [2024-11-20 09:16:37.761778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.832 [2024-11-20 09:16:37.761806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:00.832 [2024-11-20 09:16:37.766526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.832 [2024-11-20 09:16:37.766594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.832 [2024-11-20 09:16:37.766621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:00.832 [2024-11-20 09:16:37.771344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.832 [2024-11-20 09:16:37.771425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.832 [2024-11-20 09:16:37.771451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:00.832 [2024-11-20 09:16:37.776091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.832 [2024-11-20 09:16:37.776165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.832 [2024-11-20 09:16:37.776191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:00.832 [2024-11-20 09:16:37.780779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.832 [2024-11-20 09:16:37.780872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.832 [2024-11-20 09:16:37.780898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:00.832 [2024-11-20 09:16:37.785605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.833 [2024-11-20 09:16:37.785695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.833 [2024-11-20 09:16:37.785722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:00.833 [2024-11-20 09:16:37.790186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.833 [2024-11-20 09:16:37.790261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.833 [2024-11-20 09:16:37.790287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:00.833 [2024-11-20 09:16:37.794846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.833 [2024-11-20 09:16:37.794939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.833 [2024-11-20 09:16:37.794966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:00.833 [2024-11-20 09:16:37.799400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.833 [2024-11-20 09:16:37.799488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.833 [2024-11-20 09:16:37.799514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:00.833 [2024-11-20 09:16:37.804081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.833 [2024-11-20 09:16:37.804151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.833 [2024-11-20 09:16:37.804177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:00.833 [2024-11-20 09:16:37.808710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.833 [2024-11-20 09:16:37.808796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.833 [2024-11-20 09:16:37.808822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:00.833 [2024-11-20 09:16:37.813387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.833 [2024-11-20 09:16:37.813474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.833 [2024-11-20 09:16:37.813500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:00.833 [2024-11-20 09:16:37.818075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.833 [2024-11-20 09:16:37.818150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.833 [2024-11-20 09:16:37.818176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:00.833 [2024-11-20 09:16:37.822621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.833 [2024-11-20 09:16:37.822694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.833 [2024-11-20 09:16:37.822720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:00.833 [2024-11-20 09:16:37.827387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.833 [2024-11-20 09:16:37.827463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.833 [2024-11-20 09:16:37.827489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:00.833 [2024-11-20 09:16:37.832133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.833 [2024-11-20 09:16:37.832215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.833 [2024-11-20 09:16:37.832242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:00.833 [2024-11-20 09:16:37.836746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.833 [2024-11-20 09:16:37.836833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.833 [2024-11-20 09:16:37.836860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:00.833 [2024-11-20 09:16:37.841498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.833 [2024-11-20 09:16:37.841570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.833 [2024-11-20 09:16:37.841597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:00.833 [2024-11-20 09:16:37.846272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.833 [2024-11-20 09:16:37.846380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.833 [2024-11-20 09:16:37.846410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:00.833 [2024-11-20 09:16:37.851019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.833 [2024-11-20 09:16:37.851097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.833 [2024-11-20 09:16:37.851124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:00.833 [2024-11-20 09:16:37.855716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:00.833 [2024-11-20 09:16:37.855820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.833 [2024-11-20 09:16:37.855848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:01.092 [2024-11-20 09:16:37.860367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:01.092 [2024-11-20 09:16:37.860466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.092 [2024-11-20 09:16:37.860494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:01.092 [2024-11-20 09:16:37.864892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:01.092 [2024-11-20 09:16:37.864968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.092 [2024-11-20 09:16:37.864994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:01.092 [2024-11-20 09:16:37.869834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:01.092 [2024-11-20 09:16:37.869918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.092 [2024-11-20 09:16:37.869944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:01.092 [2024-11-20 09:16:37.874521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:01.092 [2024-11-20 09:16:37.874628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.092 [2024-11-20 09:16:37.874663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:01.092 [2024-11-20 09:16:37.879263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:01.092 [2024-11-20 09:16:37.879498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.092 [2024-11-20 09:16:37.879527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:01.092 [2024-11-20 09:16:37.884749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:01.092 [2024-11-20 09:16:37.884938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.092 [2024-11-20 09:16:37.884966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:01.092 [2024-11-20 09:16:37.890461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:01.092 [2024-11-20 09:16:37.890652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.092 [2024-11-20 09:16:37.890681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:01.092 [2024-11-20 09:16:37.896775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:01.092 [2024-11-20 09:16:37.896916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.092 [2024-11-20 09:16:37.896944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:01.092 [2024-11-20 09:16:37.903280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:01.092 [2024-11-20 09:16:37.903440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.092 [2024-11-20 09:16:37.903469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:01.092 [2024-11-20 09:16:37.909172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:01.092 [2024-11-20 09:16:37.909272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.092 [2024-11-20 09:16:37.909308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:01.092 [2024-11-20 09:16:37.913668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:01.092 [2024-11-20 09:16:37.913763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.092 [2024-11-20 09:16:37.913805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:01.092 [2024-11-20 09:16:37.918257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:01.092 [2024-11-20 09:16:37.918348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.092 [2024-11-20 09:16:37.918376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:01.092 [2024-11-20 09:16:37.922753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:01.092 [2024-11-20 09:16:37.922837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.092 [2024-11-20 09:16:37.922864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:01.092 [2024-11-20 09:16:37.927130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:01.092 [2024-11-20 09:16:37.927206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.092 [2024-11-20 09:16:37.927232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:01.092 [2024-11-20 09:16:37.931774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:01.092 [2024-11-20 09:16:37.931845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.092 [2024-11-20 09:16:37.931871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:01.092 [2024-11-20 09:16:37.936366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:01.092 [2024-11-20 09:16:37.936467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.092 [2024-11-20 09:16:37.936496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:01.092 [2024-11-20 09:16:37.940741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:01.092 [2024-11-20 09:16:37.940826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.092 [2024-11-20 09:16:37.940853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:01.092 [2024-11-20 09:16:37.945150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:01.092 [2024-11-20 09:16:37.945246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.092 [2024-11-20 09:16:37.945274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:01.092 [2024-11-20 09:16:37.949794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:01.092 [2024-11-20 09:16:37.949887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.092 [2024-11-20 09:16:37.949914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:01.092 [2024-11-20 09:16:37.954356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:01.092 [2024-11-20 09:16:37.954450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.092 [2024-11-20 09:16:37.954477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:01.092 [2024-11-20 09:16:37.958818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:01.092 [2024-11-20 09:16:37.958893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.092 [2024-11-20 09:16:37.958919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:01.092 [2024-11-20 09:16:37.963197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:01.092 [2024-11-20 09:16:37.963279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.092 [2024-11-20 09:16:37.963313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:01.092 [2024-11-20 09:16:37.967470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:01.092 [2024-11-20 09:16:37.967553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.092 [2024-11-20 09:16:37.967579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:01.092 [2024-11-20 09:16:37.971858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:01.092 [2024-11-20 09:16:37.971933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.092 [2024-11-20 09:16:37.971960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:01.092 [2024-11-20 09:16:37.976990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:01.092 [2024-11-20 09:16:37.977111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.092 [2024-11-20 09:16:37.977139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:01.092 [2024-11-20 09:16:37.981979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:01.093 [2024-11-20 09:16:37.982157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.093 [2024-11-20 09:16:37.982186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:01.093 [2024-11-20 09:16:37.987964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:01.093 [2024-11-20 09:16:37.988063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.093 [2024-11-20 09:16:37.988095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:01.093 [2024-11-20 09:16:37.992486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:01.093 [2024-11-20 09:16:37.992579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.093 [2024-11-20 09:16:37.992606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:01.093 [2024-11-20 09:16:37.996712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:01.093 [2024-11-20 09:16:37.996797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.093 [2024-11-20 09:16:37.996824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:01.093 [2024-11-20 09:16:38.000982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8
00:26:01.093 [2024-11-20 09:16:38.001077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:01.093 [2024-11-20 09:16:38.001112] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:01.093 [2024-11-20 09:16:38.005989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:01.093 [2024-11-20 09:16:38.006196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.093 [2024-11-20 09:16:38.006224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:01.093 [2024-11-20 09:16:38.011350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:01.093 [2024-11-20 09:16:38.011527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.093 [2024-11-20 09:16:38.011555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:01.093 [2024-11-20 09:16:38.016418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:01.093 [2024-11-20 09:16:38.016603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.093 [2024-11-20 09:16:38.016631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:01.093 [2024-11-20 09:16:38.021607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:01.093 [2024-11-20 09:16:38.021809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:01.093 [2024-11-20 09:16:38.021838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:01.093 [2024-11-20 09:16:38.026701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:01.093 [2024-11-20 09:16:38.026891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.093 [2024-11-20 09:16:38.026919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:01.093 [2024-11-20 09:16:38.031771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:01.093 [2024-11-20 09:16:38.031976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.093 [2024-11-20 09:16:38.032005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:01.093 [2024-11-20 09:16:38.036993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:01.093 [2024-11-20 09:16:38.037192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.093 [2024-11-20 09:16:38.037221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:01.093 [2024-11-20 09:16:38.042205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:01.093 [2024-11-20 09:16:38.042401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9440 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.093 [2024-11-20 09:16:38.042430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:01.093 [2024-11-20 09:16:38.047746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:01.093 [2024-11-20 09:16:38.047964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.093 [2024-11-20 09:16:38.047994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:01.093 [2024-11-20 09:16:38.054009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:01.093 [2024-11-20 09:16:38.054203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.093 [2024-11-20 09:16:38.054232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:01.093 [2024-11-20 09:16:38.059668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:01.093 [2024-11-20 09:16:38.059811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.093 [2024-11-20 09:16:38.059839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:01.093 [2024-11-20 09:16:38.064219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:01.093 [2024-11-20 09:16:38.064291] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.093 [2024-11-20 09:16:38.064326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:01.093 [2024-11-20 09:16:38.068582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:01.093 [2024-11-20 09:16:38.068679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.093 [2024-11-20 09:16:38.068709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:01.093 [2024-11-20 09:16:38.072741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:01.093 [2024-11-20 09:16:38.072809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.093 [2024-11-20 09:16:38.072835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:01.093 [2024-11-20 09:16:38.076913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:01.093 [2024-11-20 09:16:38.076996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.093 [2024-11-20 09:16:38.077023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:01.093 [2024-11-20 09:16:38.081056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:01.093 [2024-11-20 09:16:38.081140] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.093 [2024-11-20 09:16:38.081166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:01.093 [2024-11-20 09:16:38.085252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x628090) with pdu=0x2000166ff3c8 00:26:01.093 [2024-11-20 09:16:38.085341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.093 [2024-11-20 09:16:38.085367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:01.093 5618.00 IOPS, 702.25 MiB/s 00:26:01.093 Latency(us) 00:26:01.093 [2024-11-20T08:16:38.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:01.093 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:01.093 nvme0n1 : 2.00 5616.19 702.02 0.00 0.00 2842.11 1881.13 8301.23 00:26:01.093 [2024-11-20T08:16:38.122Z] =================================================================================================================== 00:26:01.093 [2024-11-20T08:16:38.122Z] Total : 5616.19 702.02 0.00 0.00 2842.11 1881.13 8301.23 00:26:01.093 { 00:26:01.093 "results": [ 00:26:01.093 { 00:26:01.093 "job": "nvme0n1", 00:26:01.093 "core_mask": "0x2", 00:26:01.093 "workload": "randwrite", 00:26:01.093 "status": "finished", 00:26:01.093 "queue_depth": 16, 00:26:01.093 "io_size": 131072, 00:26:01.093 "runtime": 2.003494, 00:26:01.093 "iops": 5616.188518657905, 00:26:01.093 "mibps": 702.0235648322381, 00:26:01.093 "io_failed": 0, 00:26:01.093 "io_timeout": 0, 00:26:01.093 "avg_latency_us": 2842.106600308093, 00:26:01.093 "min_latency_us": 1881.125925925926, 00:26:01.093 "max_latency_us": 
8301.226666666667 00:26:01.093 } 00:26:01.093 ], 00:26:01.093 "core_count": 1 00:26:01.093 } 00:26:01.093 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:01.093 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:01.094 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:01.094 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:01.094 | .driver_specific 00:26:01.094 | .nvme_error 00:26:01.094 | .status_code 00:26:01.094 | .command_transient_transport_error' 00:26:01.351 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 363 > 0 )) 00:26:01.351 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3435064 00:26:01.351 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3435064 ']' 00:26:01.351 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3435064 00:26:01.351 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:26:01.609 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:01.609 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3435064 00:26:01.609 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:01.609 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:01.609 09:16:38 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3435064' 00:26:01.609 killing process with pid 3435064 00:26:01.609 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3435064 00:26:01.609 Received shutdown signal, test time was about 2.000000 seconds 00:26:01.609 00:26:01.609 Latency(us) 00:26:01.609 [2024-11-20T08:16:38.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:01.609 [2024-11-20T08:16:38.638Z] =================================================================================================================== 00:26:01.609 [2024-11-20T08:16:38.638Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:01.609 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3435064 00:26:01.868 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3433699 00:26:01.868 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3433699 ']' 00:26:01.868 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3433699 00:26:01.868 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:26:01.868 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:01.868 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3433699 00:26:01.868 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:01.868 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:01.868 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 3433699' 00:26:01.868 killing process with pid 3433699 00:26:01.868 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3433699 00:26:01.868 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3433699 00:26:02.127 00:26:02.127 real 0m15.233s 00:26:02.127 user 0m30.681s 00:26:02.127 sys 0m4.176s 00:26:02.127 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:02.127 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:02.127 ************************************ 00:26:02.127 END TEST nvmf_digest_error 00:26:02.127 ************************************ 00:26:02.127 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:02.127 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:02.127 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:02.127 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:26:02.127 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:02.127 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:26:02.127 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:02.127 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:02.127 rmmod nvme_tcp 00:26:02.127 rmmod nvme_fabrics 00:26:02.127 rmmod nvme_keyring 00:26:02.127 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:02.127 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:26:02.127 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:26:02.127 
09:16:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3433699 ']' 00:26:02.127 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3433699 00:26:02.127 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 3433699 ']' 00:26:02.127 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 3433699 00:26:02.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3433699) - No such process 00:26:02.127 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 3433699 is not found' 00:26:02.127 Process with pid 3433699 is not found 00:26:02.127 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:02.127 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:02.127 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:02.128 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:26:02.128 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:26:02.128 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:02.128 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:26:02.128 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:02.128 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:02.128 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.128 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:02.128 09:16:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.031 
09:16:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:04.031 00:26:04.031 real 0m35.854s 00:26:04.031 user 1m3.814s 00:26:04.031 sys 0m10.181s 00:26:04.031 09:16:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:04.031 09:16:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:04.031 ************************************ 00:26:04.031 END TEST nvmf_digest 00:26:04.031 ************************************ 00:26:04.290 09:16:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:04.290 09:16:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:04.290 09:16:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:04.290 09:16:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:04.290 09:16:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:04.290 09:16:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:04.290 09:16:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.290 ************************************ 00:26:04.290 START TEST nvmf_bdevperf 00:26:04.290 ************************************ 00:26:04.290 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:04.290 * Looking for test storage... 
00:26:04.290 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:04.290 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:04.290 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:26:04.290 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:04.290 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:04.290 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:04.290 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:04.290 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:04.290 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:26:04.290 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:26:04.290 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:04.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.291 --rc genhtml_branch_coverage=1 00:26:04.291 --rc genhtml_function_coverage=1 00:26:04.291 --rc genhtml_legend=1 00:26:04.291 --rc geninfo_all_blocks=1 00:26:04.291 --rc geninfo_unexecuted_blocks=1 00:26:04.291 00:26:04.291 ' 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- 
# LCOV_OPTS=' 00:26:04.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.291 --rc genhtml_branch_coverage=1 00:26:04.291 --rc genhtml_function_coverage=1 00:26:04.291 --rc genhtml_legend=1 00:26:04.291 --rc geninfo_all_blocks=1 00:26:04.291 --rc geninfo_unexecuted_blocks=1 00:26:04.291 00:26:04.291 ' 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:04.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.291 --rc genhtml_branch_coverage=1 00:26:04.291 --rc genhtml_function_coverage=1 00:26:04.291 --rc genhtml_legend=1 00:26:04.291 --rc geninfo_all_blocks=1 00:26:04.291 --rc geninfo_unexecuted_blocks=1 00:26:04.291 00:26:04.291 ' 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:04.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.291 --rc genhtml_branch_coverage=1 00:26:04.291 --rc genhtml_function_coverage=1 00:26:04.291 --rc genhtml_legend=1 00:26:04.291 --rc geninfo_all_blocks=1 00:26:04.291 --rc geninfo_unexecuted_blocks=1 00:26:04.291 00:26:04.291 ' 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:04.291 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:04.292 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:04.292 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:04.292 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:04.292 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:04.292 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:04.292 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:04.292 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:04.292 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:04.292 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:04.292 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:04.292 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:04.292 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:04.292 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:04.292 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.292 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:04.292 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.292 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:04.292 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:04.292 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:04.292 09:16:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:06.825 09:16:43 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:06.825 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:06.825 
09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:06.825 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.825 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:06.826 Found net devices under 0000:09:00.0: cvl_0_0 00:26:06.826 09:16:43 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:06.826 Found net devices under 0000:09:00.1: cvl_0_1 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:06.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:06.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:26:06.826 00:26:06.826 --- 10.0.0.2 ping statistics --- 00:26:06.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:06.826 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:06.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:06.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:26:06.826 00:26:06.826 --- 10.0.0.1 ping statistics --- 00:26:06.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:06.826 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3437540 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3437540 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 3437540 ']' 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:06.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:06.826 [2024-11-20 09:16:43.584552] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:26:06.826 [2024-11-20 09:16:43.584648] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:06.826 [2024-11-20 09:16:43.654919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:06.826 [2024-11-20 09:16:43.708990] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:06.826 [2024-11-20 09:16:43.709041] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:06.826 [2024-11-20 09:16:43.709069] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:06.826 [2024-11-20 09:16:43.709081] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:06.826 [2024-11-20 09:16:43.709090] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:06.826 [2024-11-20 09:16:43.710651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:06.826 [2024-11-20 09:16:43.710698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:06.826 [2024-11-20 09:16:43.710702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:06.826 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:07.084 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:07.084 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:07.084 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.084 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:07.084 [2024-11-20 09:16:43.856463] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:07.084 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.084 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:07.084 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.084 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:07.084 Malloc0 00:26:07.084 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:26:07.084 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:07.084 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.084 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:07.084 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.084 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:07.084 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.084 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:07.084 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.084 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:07.084 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.084 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:07.084 [2024-11-20 09:16:43.914757] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:07.084 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.084 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:07.084 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:07.084 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:07.084 
09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:07.084 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:07.084 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:07.084 { 00:26:07.085 "params": { 00:26:07.085 "name": "Nvme$subsystem", 00:26:07.085 "trtype": "$TEST_TRANSPORT", 00:26:07.085 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.085 "adrfam": "ipv4", 00:26:07.085 "trsvcid": "$NVMF_PORT", 00:26:07.085 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.085 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.085 "hdgst": ${hdgst:-false}, 00:26:07.085 "ddgst": ${ddgst:-false} 00:26:07.085 }, 00:26:07.085 "method": "bdev_nvme_attach_controller" 00:26:07.085 } 00:26:07.085 EOF 00:26:07.085 )") 00:26:07.085 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:07.085 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:07.085 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:07.085 09:16:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:07.085 "params": { 00:26:07.085 "name": "Nvme1", 00:26:07.085 "trtype": "tcp", 00:26:07.085 "traddr": "10.0.0.2", 00:26:07.085 "adrfam": "ipv4", 00:26:07.085 "trsvcid": "4420", 00:26:07.085 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:07.085 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:07.085 "hdgst": false, 00:26:07.085 "ddgst": false 00:26:07.085 }, 00:26:07.085 "method": "bdev_nvme_attach_controller" 00:26:07.085 }' 00:26:07.085 [2024-11-20 09:16:43.966135] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:26:07.085 [2024-11-20 09:16:43.966214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3437573 ] 00:26:07.085 [2024-11-20 09:16:44.034734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.085 [2024-11-20 09:16:44.093872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:07.650 Running I/O for 1 seconds... 00:26:08.583 8510.00 IOPS, 33.24 MiB/s 00:26:08.583 Latency(us) 00:26:08.583 [2024-11-20T08:16:45.612Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:08.583 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:08.583 Verification LBA range: start 0x0 length 0x4000 00:26:08.583 Nvme1n1 : 1.05 8256.67 32.25 0.00 0.00 14929.34 3094.76 47962.64 00:26:08.583 [2024-11-20T08:16:45.612Z] =================================================================================================================== 00:26:08.583 [2024-11-20T08:16:45.612Z] Total : 8256.67 32.25 0.00 0.00 14929.34 3094.76 47962.64 00:26:08.840 09:16:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3437726 00:26:08.840 09:16:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:08.840 09:16:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:08.840 09:16:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:08.841 09:16:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:08.841 09:16:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:08.841 09:16:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:26:08.841 09:16:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:08.841 { 00:26:08.841 "params": { 00:26:08.841 "name": "Nvme$subsystem", 00:26:08.841 "trtype": "$TEST_TRANSPORT", 00:26:08.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:08.841 "adrfam": "ipv4", 00:26:08.841 "trsvcid": "$NVMF_PORT", 00:26:08.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:08.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:08.841 "hdgst": ${hdgst:-false}, 00:26:08.841 "ddgst": ${ddgst:-false} 00:26:08.841 }, 00:26:08.841 "method": "bdev_nvme_attach_controller" 00:26:08.841 } 00:26:08.841 EOF 00:26:08.841 )") 00:26:08.841 09:16:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:08.841 09:16:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:08.841 09:16:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:08.841 09:16:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:08.841 "params": { 00:26:08.841 "name": "Nvme1", 00:26:08.841 "trtype": "tcp", 00:26:08.841 "traddr": "10.0.0.2", 00:26:08.841 "adrfam": "ipv4", 00:26:08.841 "trsvcid": "4420", 00:26:08.841 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:08.841 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:08.841 "hdgst": false, 00:26:08.841 "ddgst": false 00:26:08.841 }, 00:26:08.841 "method": "bdev_nvme_attach_controller" 00:26:08.841 }' 00:26:08.841 [2024-11-20 09:16:45.728126] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:26:08.841 [2024-11-20 09:16:45.728211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3437726 ] 00:26:08.841 [2024-11-20 09:16:45.800994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.841 [2024-11-20 09:16:45.859167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.406 Running I/O for 15 seconds... 00:26:11.270 8287.00 IOPS, 32.37 MiB/s [2024-11-20T08:16:48.866Z] 8358.00 IOPS, 32.65 MiB/s [2024-11-20T08:16:48.866Z] 09:16:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3437540 00:26:11.837 09:16:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:11.837 [2024-11-20 09:16:48.690745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:34472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.837 [2024-11-20 09:16:48.690796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.837 [2024-11-20 09:16:48.690829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:34480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.837 [2024-11-20 09:16:48.690845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.837 [2024-11-20 09:16:48.690862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:34992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.837 [2024-11-20 09:16:48.690885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.837 [2024-11-20 09:16:48.690902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:124 nsid:1 lba:35000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:11.837 [2024-11-20 09:16:48.690917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.837 [2024-11-20 09:16:48.690932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:11.837 [2024-11-20 09:16:48.690945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.837 [2024-11-20 09:16:48.690960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:35016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:11.837 [2024-11-20 09:16:48.690988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs repeated for the remaining outstanding I/O on qid:1, each aborted with SQ DELETION (00/08): WRITE commands lba:35024 through lba:35488 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands lba:34488 through lba:34888 (len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), timestamps 09:16:48.691003 through 09:16:48.694152 ...]
00:26:11.840 [2024-11-20 09:16:48.694169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:34896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.840 [2024-11-20 09:16:48.694181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0
sqhd:0000 p:0 m:0 dnr:0 00:26:11.840 [2024-11-20 09:16:48.694195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.840 [2024-11-20 09:16:48.694207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.840 [2024-11-20 09:16:48.694220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:34912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.840 [2024-11-20 09:16:48.694232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.840 [2024-11-20 09:16:48.694245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:34920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.840 [2024-11-20 09:16:48.694256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.840 [2024-11-20 09:16:48.694270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:34928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.840 [2024-11-20 09:16:48.694297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.840 [2024-11-20 09:16:48.694323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:34936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.840 [2024-11-20 09:16:48.694338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.840 [2024-11-20 09:16:48.694363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:34944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.840 [2024-11-20 
09:16:48.694377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.840 [2024-11-20 09:16:48.694398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:34952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.840 [2024-11-20 09:16:48.694413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.840 [2024-11-20 09:16:48.694427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.840 [2024-11-20 09:16:48.694441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.840 [2024-11-20 09:16:48.694456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.840 [2024-11-20 09:16:48.694469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.840 [2024-11-20 09:16:48.694484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:34976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.840 [2024-11-20 09:16:48.694498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.840 [2024-11-20 09:16:48.694512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff78d0 is same with the state(6) to be set 00:26:11.840 [2024-11-20 09:16:48.694530] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:11.840 [2024-11-20 09:16:48.694542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:11.840 [2024-11-20 
09:16:48.694553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34984 len:8 PRP1 0x0 PRP2 0x0 00:26:11.840 [2024-11-20 09:16:48.694570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.840 [2024-11-20 09:16:48.694753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:11.840 [2024-11-20 09:16:48.694774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.840 [2024-11-20 09:16:48.694789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:11.840 [2024-11-20 09:16:48.694816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.840 [2024-11-20 09:16:48.694830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:11.840 [2024-11-20 09:16:48.694842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.840 [2024-11-20 09:16:48.694855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:11.840 [2024-11-20 09:16:48.694881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.840 [2024-11-20 09:16:48.694894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:11.840 [2024-11-20 09:16:48.698204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting 
controller 00:26:11.840 [2024-11-20 09:16:48.698240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:11.840 [2024-11-20 09:16:48.698874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.840 [2024-11-20 09:16:48.698903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:11.840 [2024-11-20 09:16:48.698919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:11.840 [2024-11-20 09:16:48.699135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:11.840 [2024-11-20 09:16:48.699377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.840 [2024-11-20 09:16:48.699400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.841 [2024-11-20 09:16:48.699427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.841 [2024-11-20 09:16:48.699444] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.841 [2024-11-20 09:16:48.711748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.841 [2024-11-20 09:16:48.712164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.841 [2024-11-20 09:16:48.712217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:11.841 [2024-11-20 09:16:48.712232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:11.841 [2024-11-20 09:16:48.712470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:11.841 [2024-11-20 09:16:48.712709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.841 [2024-11-20 09:16:48.712729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.841 [2024-11-20 09:16:48.712741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.841 [2024-11-20 09:16:48.712758] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.841 [2024-11-20 09:16:48.724875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.841 [2024-11-20 09:16:48.725389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.841 [2024-11-20 09:16:48.725419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:11.841 [2024-11-20 09:16:48.725435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:11.841 [2024-11-20 09:16:48.725674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:11.841 [2024-11-20 09:16:48.725882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.841 [2024-11-20 09:16:48.725902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.841 [2024-11-20 09:16:48.725914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.841 [2024-11-20 09:16:48.725925] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.841 [2024-11-20 09:16:48.737949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.841 [2024-11-20 09:16:48.738374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.841 [2024-11-20 09:16:48.738403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:11.841 [2024-11-20 09:16:48.738419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:11.841 [2024-11-20 09:16:48.738660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:11.841 [2024-11-20 09:16:48.738886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.841 [2024-11-20 09:16:48.738905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.841 [2024-11-20 09:16:48.738918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.841 [2024-11-20 09:16:48.738929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.841 [2024-11-20 09:16:48.751043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.841 [2024-11-20 09:16:48.751433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.841 [2024-11-20 09:16:48.751476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:11.841 [2024-11-20 09:16:48.751492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:11.841 [2024-11-20 09:16:48.751713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:11.841 [2024-11-20 09:16:48.751922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.841 [2024-11-20 09:16:48.751941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.841 [2024-11-20 09:16:48.751953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.841 [2024-11-20 09:16:48.751964] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.841 [2024-11-20 09:16:48.764092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.841 [2024-11-20 09:16:48.764530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.841 [2024-11-20 09:16:48.764573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:11.841 [2024-11-20 09:16:48.764589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:11.841 [2024-11-20 09:16:48.764833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:11.841 [2024-11-20 09:16:48.765041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.841 [2024-11-20 09:16:48.765060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.841 [2024-11-20 09:16:48.765072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.841 [2024-11-20 09:16:48.765083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.841 [2024-11-20 09:16:48.777251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.841 [2024-11-20 09:16:48.777622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.841 [2024-11-20 09:16:48.777650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:11.841 [2024-11-20 09:16:48.777665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:11.841 [2024-11-20 09:16:48.777902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:11.841 [2024-11-20 09:16:48.778111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.841 [2024-11-20 09:16:48.778130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.841 [2024-11-20 09:16:48.778142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.841 [2024-11-20 09:16:48.778153] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.841 [2024-11-20 09:16:48.790534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.841 [2024-11-20 09:16:48.790980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.841 [2024-11-20 09:16:48.791008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:11.841 [2024-11-20 09:16:48.791023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:11.841 [2024-11-20 09:16:48.791257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:11.841 [2024-11-20 09:16:48.791465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.841 [2024-11-20 09:16:48.791486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.841 [2024-11-20 09:16:48.791498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.841 [2024-11-20 09:16:48.791509] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.841 [2024-11-20 09:16:48.803551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.841 [2024-11-20 09:16:48.803854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.841 [2024-11-20 09:16:48.803894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:11.841 [2024-11-20 09:16:48.803908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:11.841 [2024-11-20 09:16:48.804127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:11.841 [2024-11-20 09:16:48.804364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.841 [2024-11-20 09:16:48.804384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.841 [2024-11-20 09:16:48.804397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.841 [2024-11-20 09:16:48.804409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.841 [2024-11-20 09:16:48.816641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.841 [2024-11-20 09:16:48.817037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.841 [2024-11-20 09:16:48.817065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:11.841 [2024-11-20 09:16:48.817080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:11.841 [2024-11-20 09:16:48.817310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:11.841 [2024-11-20 09:16:48.817526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.842 [2024-11-20 09:16:48.817546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.842 [2024-11-20 09:16:48.817558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.842 [2024-11-20 09:16:48.817570] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.842 [2024-11-20 09:16:48.829897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.842 [2024-11-20 09:16:48.830309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.842 [2024-11-20 09:16:48.830337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:11.842 [2024-11-20 09:16:48.830367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:11.842 [2024-11-20 09:16:48.830608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:11.842 [2024-11-20 09:16:48.830816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.842 [2024-11-20 09:16:48.830835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.842 [2024-11-20 09:16:48.830847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.842 [2024-11-20 09:16:48.830858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.842 [2024-11-20 09:16:48.843127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.842 [2024-11-20 09:16:48.843516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.842 [2024-11-20 09:16:48.843557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:11.842 [2024-11-20 09:16:48.843573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:11.842 [2024-11-20 09:16:48.843794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:11.842 [2024-11-20 09:16:48.844003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.842 [2024-11-20 09:16:48.844026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.842 [2024-11-20 09:16:48.844039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.842 [2024-11-20 09:16:48.844050] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:11.842 [2024-11-20 09:16:48.856426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.842 [2024-11-20 09:16:48.856819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.842 [2024-11-20 09:16:48.856847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:11.842 [2024-11-20 09:16:48.856862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:11.842 [2024-11-20 09:16:48.857096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:11.842 [2024-11-20 09:16:48.857327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.842 [2024-11-20 09:16:48.857355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.842 [2024-11-20 09:16:48.857367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:11.842 [2024-11-20 09:16:48.857379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.101 [2024-11-20 09:16:48.869600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.101 [2024-11-20 09:16:48.869966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.101 [2024-11-20 09:16:48.870009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.101 [2024-11-20 09:16:48.870024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.101 [2024-11-20 09:16:48.870279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.101 [2024-11-20 09:16:48.870560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.101 [2024-11-20 09:16:48.870595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.101 [2024-11-20 09:16:48.870610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.101 [2024-11-20 09:16:48.870622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.101 [2024-11-20 09:16:48.882768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.101 [2024-11-20 09:16:48.883132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.101 [2024-11-20 09:16:48.883175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.101 [2024-11-20 09:16:48.883191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.101 [2024-11-20 09:16:48.883454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.101 [2024-11-20 09:16:48.883666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.101 [2024-11-20 09:16:48.883685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.101 [2024-11-20 09:16:48.883697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.101 [2024-11-20 09:16:48.883713] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.101 [2024-11-20 09:16:48.895906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.101 [2024-11-20 09:16:48.896240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.101 [2024-11-20 09:16:48.896268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.101 [2024-11-20 09:16:48.896283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.101 [2024-11-20 09:16:48.896547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.101 [2024-11-20 09:16:48.896777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.101 [2024-11-20 09:16:48.896796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.101 [2024-11-20 09:16:48.896807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.101 [2024-11-20 09:16:48.896818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.101 [2024-11-20 09:16:48.908992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.101 [2024-11-20 09:16:48.909369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.101 [2024-11-20 09:16:48.909411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.101 [2024-11-20 09:16:48.909426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.101 [2024-11-20 09:16:48.909647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.101 [2024-11-20 09:16:48.909856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.101 [2024-11-20 09:16:48.909875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.101 [2024-11-20 09:16:48.909887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.101 [2024-11-20 09:16:48.909898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.101 [2024-11-20 09:16:48.922060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.101 [2024-11-20 09:16:48.922457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.101 [2024-11-20 09:16:48.922485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.101 [2024-11-20 09:16:48.922500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.101 [2024-11-20 09:16:48.922720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.101 [2024-11-20 09:16:48.922946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.101 [2024-11-20 09:16:48.922965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.101 [2024-11-20 09:16:48.922977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.101 [2024-11-20 09:16:48.922988] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.101 [2024-11-20 09:16:48.935171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.101 [2024-11-20 09:16:48.935492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.101 [2024-11-20 09:16:48.935534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.101 [2024-11-20 09:16:48.935549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.101 [2024-11-20 09:16:48.935750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.101 [2024-11-20 09:16:48.935974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.101 [2024-11-20 09:16:48.935993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.101 [2024-11-20 09:16:48.936005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.101 [2024-11-20 09:16:48.936016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.101 [2024-11-20 09:16:48.948430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.101 [2024-11-20 09:16:48.948794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.101 [2024-11-20 09:16:48.948822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.101 [2024-11-20 09:16:48.948837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.101 [2024-11-20 09:16:48.949087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.102 [2024-11-20 09:16:48.949292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.102 [2024-11-20 09:16:48.949337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.102 [2024-11-20 09:16:48.949351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.102 [2024-11-20 09:16:48.949364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.102 [2024-11-20 09:16:48.962197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.102 [2024-11-20 09:16:48.962576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.102 [2024-11-20 09:16:48.962606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.102 [2024-11-20 09:16:48.962623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.102 [2024-11-20 09:16:48.962836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.102 [2024-11-20 09:16:48.963055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.102 [2024-11-20 09:16:48.963076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.102 [2024-11-20 09:16:48.963090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.102 [2024-11-20 09:16:48.963117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.102 [2024-11-20 09:16:48.975660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.102 [2024-11-20 09:16:48.976026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.102 [2024-11-20 09:16:48.976054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.102 [2024-11-20 09:16:48.976070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.102 [2024-11-20 09:16:48.976322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.102 [2024-11-20 09:16:48.976542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.102 [2024-11-20 09:16:48.976562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.102 [2024-11-20 09:16:48.976575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.102 [2024-11-20 09:16:48.976587] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.102 [2024-11-20 09:16:48.988859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.102 [2024-11-20 09:16:48.989269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.102 [2024-11-20 09:16:48.989298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.102 [2024-11-20 09:16:48.989322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.102 [2024-11-20 09:16:48.989552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.102 [2024-11-20 09:16:48.989782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.102 [2024-11-20 09:16:48.989801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.102 [2024-11-20 09:16:48.989813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.102 [2024-11-20 09:16:48.989824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.102 [2024-11-20 09:16:49.002005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.102 [2024-11-20 09:16:49.002402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.102 [2024-11-20 09:16:49.002431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.102 [2024-11-20 09:16:49.002447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.102 [2024-11-20 09:16:49.002675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.102 [2024-11-20 09:16:49.002901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.102 [2024-11-20 09:16:49.002919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.102 [2024-11-20 09:16:49.002931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.102 [2024-11-20 09:16:49.002943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.102 [2024-11-20 09:16:49.015229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.102 [2024-11-20 09:16:49.015744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.102 [2024-11-20 09:16:49.015787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.102 [2024-11-20 09:16:49.015804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.102 [2024-11-20 09:16:49.016054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.102 [2024-11-20 09:16:49.016262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.102 [2024-11-20 09:16:49.016285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.102 [2024-11-20 09:16:49.016298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.102 [2024-11-20 09:16:49.016335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.102 [2024-11-20 09:16:49.028213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.102 [2024-11-20 09:16:49.028658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.102 [2024-11-20 09:16:49.028701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.102 [2024-11-20 09:16:49.028717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.102 [2024-11-20 09:16:49.028958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.102 [2024-11-20 09:16:49.029151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.102 [2024-11-20 09:16:49.029169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.102 [2024-11-20 09:16:49.029181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.102 [2024-11-20 09:16:49.029192] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.102 [2024-11-20 09:16:49.041267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.102 [2024-11-20 09:16:49.041680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.102 [2024-11-20 09:16:49.041724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.102 [2024-11-20 09:16:49.041738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.102 [2024-11-20 09:16:49.041973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.102 [2024-11-20 09:16:49.042182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.102 [2024-11-20 09:16:49.042201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.102 [2024-11-20 09:16:49.042213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.102 [2024-11-20 09:16:49.042224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.102 [2024-11-20 09:16:49.054530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.102 [2024-11-20 09:16:49.054981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.102 [2024-11-20 09:16:49.055036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.102 [2024-11-20 09:16:49.055050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.102 [2024-11-20 09:16:49.055321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.102 [2024-11-20 09:16:49.055539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.102 [2024-11-20 09:16:49.055560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.102 [2024-11-20 09:16:49.055573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.102 [2024-11-20 09:16:49.055590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.102 [2024-11-20 09:16:49.067685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.102 [2024-11-20 09:16:49.068097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.102 [2024-11-20 09:16:49.068144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.102 [2024-11-20 09:16:49.068159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.102 [2024-11-20 09:16:49.068436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.102 [2024-11-20 09:16:49.068650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.102 [2024-11-20 09:16:49.068669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.102 [2024-11-20 09:16:49.068682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.102 [2024-11-20 09:16:49.068693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.102 [2024-11-20 09:16:49.081166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.102 [2024-11-20 09:16:49.081516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.102 [2024-11-20 09:16:49.081544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.102 [2024-11-20 09:16:49.081559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.103 [2024-11-20 09:16:49.081790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.103 [2024-11-20 09:16:49.081997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.103 [2024-11-20 09:16:49.082016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.103 [2024-11-20 09:16:49.082028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.103 [2024-11-20 09:16:49.082039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.103 [2024-11-20 09:16:49.094336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.103 [2024-11-20 09:16:49.094716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.103 [2024-11-20 09:16:49.094743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.103 [2024-11-20 09:16:49.094758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.103 [2024-11-20 09:16:49.094979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.103 [2024-11-20 09:16:49.095189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.103 [2024-11-20 09:16:49.095208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.103 [2024-11-20 09:16:49.095220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.103 [2024-11-20 09:16:49.095231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.103 [2024-11-20 09:16:49.107579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.103 [2024-11-20 09:16:49.108071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.103 [2024-11-20 09:16:49.108125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.103 [2024-11-20 09:16:49.108140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.103 [2024-11-20 09:16:49.108400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.103 [2024-11-20 09:16:49.108614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.103 [2024-11-20 09:16:49.108633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.103 [2024-11-20 09:16:49.108645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.103 [2024-11-20 09:16:49.108656] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.103 [2024-11-20 09:16:49.120738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.103 [2024-11-20 09:16:49.121153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.103 [2024-11-20 09:16:49.121206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.103 [2024-11-20 09:16:49.121220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.103 [2024-11-20 09:16:49.121457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.103 [2024-11-20 09:16:49.121669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.103 [2024-11-20 09:16:49.121688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.103 [2024-11-20 09:16:49.121700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.103 [2024-11-20 09:16:49.121711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.362 [2024-11-20 09:16:49.134396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.362 [2024-11-20 09:16:49.134832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.362 [2024-11-20 09:16:49.134884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.362 [2024-11-20 09:16:49.134898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.362 [2024-11-20 09:16:49.135158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.362 [2024-11-20 09:16:49.135377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.362 [2024-11-20 09:16:49.135397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.362 [2024-11-20 09:16:49.135410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.362 [2024-11-20 09:16:49.135421] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.362 [2024-11-20 09:16:49.147722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.362 [2024-11-20 09:16:49.148217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.362 [2024-11-20 09:16:49.148260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.362 [2024-11-20 09:16:49.148276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.362 [2024-11-20 09:16:49.148527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.362 [2024-11-20 09:16:49.148757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.362 [2024-11-20 09:16:49.148775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.362 [2024-11-20 09:16:49.148787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.362 [2024-11-20 09:16:49.148798] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.362 [2024-11-20 09:16:49.160893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.362 [2024-11-20 09:16:49.161260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.362 [2024-11-20 09:16:49.161287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.362 [2024-11-20 09:16:49.161313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.362 [2024-11-20 09:16:49.161550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.362 [2024-11-20 09:16:49.161760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.362 [2024-11-20 09:16:49.161779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.362 [2024-11-20 09:16:49.161791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.362 [2024-11-20 09:16:49.161802] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.362 [2024-11-20 09:16:49.174445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.362 [2024-11-20 09:16:49.174797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.362 [2024-11-20 09:16:49.174839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.362 [2024-11-20 09:16:49.174854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.362 [2024-11-20 09:16:49.175075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.362 [2024-11-20 09:16:49.175315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.362 [2024-11-20 09:16:49.175336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.362 [2024-11-20 09:16:49.175364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.362 [2024-11-20 09:16:49.175377] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.362 [2024-11-20 09:16:49.187905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.362 [2024-11-20 09:16:49.188347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.362 [2024-11-20 09:16:49.188377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.362 [2024-11-20 09:16:49.188393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.362 [2024-11-20 09:16:49.188625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.362 [2024-11-20 09:16:49.188841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.362 [2024-11-20 09:16:49.188865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.362 [2024-11-20 09:16:49.188878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.362 [2024-11-20 09:16:49.188890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.362 [2024-11-20 09:16:49.201133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.362 [2024-11-20 09:16:49.201530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.362 [2024-11-20 09:16:49.201560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.362 [2024-11-20 09:16:49.201576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.362 [2024-11-20 09:16:49.201805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.362 [2024-11-20 09:16:49.202030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.362 [2024-11-20 09:16:49.202050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.362 [2024-11-20 09:16:49.202063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.362 [2024-11-20 09:16:49.202075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.362 6897.67 IOPS, 26.94 MiB/s [2024-11-20T08:16:49.391Z] [2024-11-20 09:16:49.216101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.362 [2024-11-20 09:16:49.216486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.362 [2024-11-20 09:16:49.216515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.362 [2024-11-20 09:16:49.216531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.362 [2024-11-20 09:16:49.216771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.362 [2024-11-20 09:16:49.216985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.362 [2024-11-20 09:16:49.217004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.362 [2024-11-20 09:16:49.217016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.362 [2024-11-20 09:16:49.217027] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.362 [2024-11-20 09:16:49.229559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:12.362 [2024-11-20 09:16:49.229931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.362 [2024-11-20 09:16:49.229960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:12.363 [2024-11-20 09:16:49.229976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:12.363 [2024-11-20 09:16:49.230217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:12.363 [2024-11-20 09:16:49.230449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:12.363 [2024-11-20 09:16:49.230471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:12.363 [2024-11-20 09:16:49.230485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:12.363 [2024-11-20 09:16:49.230502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:12.363 [2024-11-20 09:16:49.242676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:12.363 [2024-11-20 09:16:49.243146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.363 [2024-11-20 09:16:49.243175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:12.363 [2024-11-20 09:16:49.243206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:12.363 [2024-11-20 09:16:49.243454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:12.363 [2024-11-20 09:16:49.243689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:12.363 [2024-11-20 09:16:49.243708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:12.363 [2024-11-20 09:16:49.243720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:12.363 [2024-11-20 09:16:49.243731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:12.363 [2024-11-20 09:16:49.255988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:12.363 [2024-11-20 09:16:49.256324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.363 [2024-11-20 09:16:49.256356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:12.363 [2024-11-20 09:16:49.256372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:12.363 [2024-11-20 09:16:49.256586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:12.363 [2024-11-20 09:16:49.256795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:12.363 [2024-11-20 09:16:49.256814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:12.363 [2024-11-20 09:16:49.256826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:12.363 [2024-11-20 09:16:49.256838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:12.363 [2024-11-20 09:16:49.269292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:12.363 [2024-11-20 09:16:49.269662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.363 [2024-11-20 09:16:49.269689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:12.363 [2024-11-20 09:16:49.269705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:12.363 [2024-11-20 09:16:49.269927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:12.363 [2024-11-20 09:16:49.270134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:12.363 [2024-11-20 09:16:49.270152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:12.363 [2024-11-20 09:16:49.270164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:12.363 [2024-11-20 09:16:49.270175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:12.363 [2024-11-20 09:16:49.282557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:12.363 [2024-11-20 09:16:49.282915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.363 [2024-11-20 09:16:49.282946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:12.363 [2024-11-20 09:16:49.282979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:12.363 [2024-11-20 09:16:49.283215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:12.363 [2024-11-20 09:16:49.283435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:12.363 [2024-11-20 09:16:49.283455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:12.363 [2024-11-20 09:16:49.283467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:12.363 [2024-11-20 09:16:49.283479] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:12.363 [2024-11-20 09:16:49.295769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:12.363 [2024-11-20 09:16:49.296121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.363 [2024-11-20 09:16:49.296148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:12.363 [2024-11-20 09:16:49.296163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:12.363 [2024-11-20 09:16:49.296390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:12.363 [2024-11-20 09:16:49.296610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:12.363 [2024-11-20 09:16:49.296629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:12.363 [2024-11-20 09:16:49.296642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:12.363 [2024-11-20 09:16:49.296653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:12.363 [2024-11-20 09:16:49.308968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:12.363 [2024-11-20 09:16:49.309343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.363 [2024-11-20 09:16:49.309386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:12.363 [2024-11-20 09:16:49.309401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:12.363 [2024-11-20 09:16:49.309654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:12.363 [2024-11-20 09:16:49.309852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:12.363 [2024-11-20 09:16:49.309871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:12.363 [2024-11-20 09:16:49.309884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:12.363 [2024-11-20 09:16:49.309896] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:12.363 [2024-11-20 09:16:49.322088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:12.363 [2024-11-20 09:16:49.322465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.363 [2024-11-20 09:16:49.322493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:12.363 [2024-11-20 09:16:49.322508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:12.363 [2024-11-20 09:16:49.322747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:12.363 [2024-11-20 09:16:49.322955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:12.363 [2024-11-20 09:16:49.322974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:12.363 [2024-11-20 09:16:49.322985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:12.363 [2024-11-20 09:16:49.322997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:12.363 [2024-11-20 09:16:49.335226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:12.363 [2024-11-20 09:16:49.335620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.363 [2024-11-20 09:16:49.335649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:12.363 [2024-11-20 09:16:49.335664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:12.363 [2024-11-20 09:16:49.335898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:12.363 [2024-11-20 09:16:49.336106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:12.363 [2024-11-20 09:16:49.336125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:12.363 [2024-11-20 09:16:49.336136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:12.363 [2024-11-20 09:16:49.336147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:12.363 [2024-11-20 09:16:49.348440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:12.363 [2024-11-20 09:16:49.348820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.363 [2024-11-20 09:16:49.348862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:12.363 [2024-11-20 09:16:49.348877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:12.363 [2024-11-20 09:16:49.349118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:12.363 [2024-11-20 09:16:49.349337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:12.363 [2024-11-20 09:16:49.349357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:12.363 [2024-11-20 09:16:49.349369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:12.363 [2024-11-20 09:16:49.349381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:12.363 [2024-11-20 09:16:49.361500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.363 [2024-11-20 09:16:49.361899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.364 [2024-11-20 09:16:49.361970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.364 [2024-11-20 09:16:49.361984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.364 [2024-11-20 09:16:49.362224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.364 [2024-11-20 09:16:49.362425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.364 [2024-11-20 09:16:49.362449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.364 [2024-11-20 09:16:49.362461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.364 [2024-11-20 09:16:49.362472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.364 [2024-11-20 09:16:49.374614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:12.364 [2024-11-20 09:16:49.374917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.364 [2024-11-20 09:16:49.374943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:12.364 [2024-11-20 09:16:49.374957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:12.364 [2024-11-20 09:16:49.375151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:12.364 [2024-11-20 09:16:49.375385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:12.364 [2024-11-20 09:16:49.375405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:12.364 [2024-11-20 09:16:49.375417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:12.364 [2024-11-20 09:16:49.375429] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:12.623 [2024-11-20 09:16:49.388447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:12.623 [2024-11-20 09:16:49.388893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.623 [2024-11-20 09:16:49.388920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:12.623 [2024-11-20 09:16:49.388950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:12.623 [2024-11-20 09:16:49.389192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:12.623 [2024-11-20 09:16:49.389444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:12.623 [2024-11-20 09:16:49.389464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:12.623 [2024-11-20 09:16:49.389476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:12.623 [2024-11-20 09:16:49.389488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:12.623 [2024-11-20 09:16:49.401723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:12.623 [2024-11-20 09:16:49.402197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.623 [2024-11-20 09:16:49.402250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:12.623 [2024-11-20 09:16:49.402265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:12.623 [2024-11-20 09:16:49.402542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:12.623 [2024-11-20 09:16:49.402752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:12.623 [2024-11-20 09:16:49.402771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:12.623 [2024-11-20 09:16:49.402783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:12.623 [2024-11-20 09:16:49.402798] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:12.623 [2024-11-20 09:16:49.414806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:12.623 [2024-11-20 09:16:49.415196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.623 [2024-11-20 09:16:49.415228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:12.623 [2024-11-20 09:16:49.415245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:12.623 [2024-11-20 09:16:49.415526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:12.623 [2024-11-20 09:16:49.415738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:12.623 [2024-11-20 09:16:49.415757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:12.623 [2024-11-20 09:16:49.415769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:12.623 [2024-11-20 09:16:49.415781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:12.623 [2024-11-20 09:16:49.427991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:12.623 [2024-11-20 09:16:49.428408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.623 [2024-11-20 09:16:49.428450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:12.623 [2024-11-20 09:16:49.428465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:12.623 [2024-11-20 09:16:49.428697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:12.623 [2024-11-20 09:16:49.428890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:12.623 [2024-11-20 09:16:49.428909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:12.623 [2024-11-20 09:16:49.428921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:12.623 [2024-11-20 09:16:49.428932] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:12.623 [2024-11-20 09:16:49.441175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:12.623 [2024-11-20 09:16:49.441546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.623 [2024-11-20 09:16:49.441588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:12.623 [2024-11-20 09:16:49.441603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:12.623 [2024-11-20 09:16:49.441850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:12.623 [2024-11-20 09:16:49.442048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:12.623 [2024-11-20 09:16:49.442067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:12.623 [2024-11-20 09:16:49.442079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:12.623 [2024-11-20 09:16:49.442091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:12.623 [2024-11-20 09:16:49.454448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:12.623 [2024-11-20 09:16:49.454859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.623 [2024-11-20 09:16:49.454888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:12.623 [2024-11-20 09:16:49.454904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:12.623 [2024-11-20 09:16:49.455133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:12.623 [2024-11-20 09:16:49.455396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:12.623 [2024-11-20 09:16:49.455417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:12.623 [2024-11-20 09:16:49.455430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:12.623 [2024-11-20 09:16:49.455442] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:12.623 [2024-11-20 09:16:49.467861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:12.623 [2024-11-20 09:16:49.468256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.623 [2024-11-20 09:16:49.468284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:12.623 [2024-11-20 09:16:49.468299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:12.623 [2024-11-20 09:16:49.468552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:12.623 [2024-11-20 09:16:49.468783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:12.623 [2024-11-20 09:16:49.468801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:12.623 [2024-11-20 09:16:49.468813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:12.623 [2024-11-20 09:16:49.468824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:12.623 [2024-11-20 09:16:49.481187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:12.623 [2024-11-20 09:16:49.481959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.623 [2024-11-20 09:16:49.482000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:12.623 [2024-11-20 09:16:49.482014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:12.623 [2024-11-20 09:16:49.482224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:12.623 [2024-11-20 09:16:49.482480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:12.623 [2024-11-20 09:16:49.482502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:12.623 [2024-11-20 09:16:49.482516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:12.623 [2024-11-20 09:16:49.482530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:12.624 [2024-11-20 09:16:49.494822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:12.624 [2024-11-20 09:16:49.495202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.624 [2024-11-20 09:16:49.495244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:12.624 [2024-11-20 09:16:49.495259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:12.624 [2024-11-20 09:16:49.495515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:12.624 [2024-11-20 09:16:49.495753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:12.624 [2024-11-20 09:16:49.495772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:12.624 [2024-11-20 09:16:49.495799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:12.624 [2024-11-20 09:16:49.495811] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:12.624 [2024-11-20 09:16:49.508159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:12.624 [2024-11-20 09:16:49.508539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.624 [2024-11-20 09:16:49.508569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:12.624 [2024-11-20 09:16:49.508585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:12.624 [2024-11-20 09:16:49.508826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:12.624 [2024-11-20 09:16:49.509026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:12.624 [2024-11-20 09:16:49.509045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:12.624 [2024-11-20 09:16:49.509058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:12.624 [2024-11-20 09:16:49.509070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:12.624 [2024-11-20 09:16:49.521483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:12.624 [2024-11-20 09:16:49.521873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.624 [2024-11-20 09:16:49.521915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:12.624 [2024-11-20 09:16:49.521929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:12.624 [2024-11-20 09:16:49.522163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:12.624 [2024-11-20 09:16:49.522391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:12.624 [2024-11-20 09:16:49.522412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:12.624 [2024-11-20 09:16:49.522425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:12.624 [2024-11-20 09:16:49.522437] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:12.624 [2024-11-20 09:16:49.534641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:12.624 [2024-11-20 09:16:49.535077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.624 [2024-11-20 09:16:49.535105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:12.624 [2024-11-20 09:16:49.535120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:12.624 [2024-11-20 09:16:49.535358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:12.624 [2024-11-20 09:16:49.535570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:12.624 [2024-11-20 09:16:49.535610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:12.624 [2024-11-20 09:16:49.535624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:12.624 [2024-11-20 09:16:49.535636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:12.624 [2024-11-20 09:16:49.547918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.624 [2024-11-20 09:16:49.548289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.624 [2024-11-20 09:16:49.548325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.624 [2024-11-20 09:16:49.548352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.624 [2024-11-20 09:16:49.548582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.624 [2024-11-20 09:16:49.548798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.624 [2024-11-20 09:16:49.548818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.624 [2024-11-20 09:16:49.548830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.624 [2024-11-20 09:16:49.548842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.624 [2024-11-20 09:16:49.561250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.624 [2024-11-20 09:16:49.561597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.624 [2024-11-20 09:16:49.561642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.624 [2024-11-20 09:16:49.561658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.624 [2024-11-20 09:16:49.561894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.624 [2024-11-20 09:16:49.562109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.624 [2024-11-20 09:16:49.562128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.624 [2024-11-20 09:16:49.562140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.624 [2024-11-20 09:16:49.562151] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.624 [2024-11-20 09:16:49.574442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.624 [2024-11-20 09:16:49.574833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.624 [2024-11-20 09:16:49.574876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.624 [2024-11-20 09:16:49.574891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.624 [2024-11-20 09:16:49.575145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.624 [2024-11-20 09:16:49.575387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.624 [2024-11-20 09:16:49.575408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.624 [2024-11-20 09:16:49.575420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.624 [2024-11-20 09:16:49.575437] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.624 [2024-11-20 09:16:49.587676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.624 [2024-11-20 09:16:49.588048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.624 [2024-11-20 09:16:49.588077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.624 [2024-11-20 09:16:49.588093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.624 [2024-11-20 09:16:49.588345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.624 [2024-11-20 09:16:49.588572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.624 [2024-11-20 09:16:49.588608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.624 [2024-11-20 09:16:49.588621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.624 [2024-11-20 09:16:49.588633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.624 [2024-11-20 09:16:49.600946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.624 [2024-11-20 09:16:49.601317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.624 [2024-11-20 09:16:49.601360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.624 [2024-11-20 09:16:49.601376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.624 [2024-11-20 09:16:49.601626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.624 [2024-11-20 09:16:49.601841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.624 [2024-11-20 09:16:49.601860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.624 [2024-11-20 09:16:49.601873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.624 [2024-11-20 09:16:49.601884] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.624 [2024-11-20 09:16:49.614145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.624 [2024-11-20 09:16:49.614526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.624 [2024-11-20 09:16:49.614555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.624 [2024-11-20 09:16:49.614571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.624 [2024-11-20 09:16:49.614798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.624 [2024-11-20 09:16:49.615013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.625 [2024-11-20 09:16:49.615032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.625 [2024-11-20 09:16:49.615044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.625 [2024-11-20 09:16:49.615056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.625 [2024-11-20 09:16:49.627450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.625 [2024-11-20 09:16:49.627847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.625 [2024-11-20 09:16:49.627891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.625 [2024-11-20 09:16:49.627906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.625 [2024-11-20 09:16:49.628173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.625 [2024-11-20 09:16:49.628401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.625 [2024-11-20 09:16:49.628422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.625 [2024-11-20 09:16:49.628435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.625 [2024-11-20 09:16:49.628446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.625 [2024-11-20 09:16:49.640681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.625 [2024-11-20 09:16:49.641053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.625 [2024-11-20 09:16:49.641096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.625 [2024-11-20 09:16:49.641112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.625 [2024-11-20 09:16:49.641375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.625 [2024-11-20 09:16:49.641581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.625 [2024-11-20 09:16:49.641600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.625 [2024-11-20 09:16:49.641626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.625 [2024-11-20 09:16:49.641638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.884 [2024-11-20 09:16:49.654215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.884 [2024-11-20 09:16:49.654529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.884 [2024-11-20 09:16:49.654558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.884 [2024-11-20 09:16:49.654574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.884 [2024-11-20 09:16:49.654802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.884 [2024-11-20 09:16:49.655046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.884 [2024-11-20 09:16:49.655067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.884 [2024-11-20 09:16:49.655080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.884 [2024-11-20 09:16:49.655092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.884 [2024-11-20 09:16:49.667405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.884 [2024-11-20 09:16:49.667795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-11-20 09:16:49.667824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.885 [2024-11-20 09:16:49.667839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.885 [2024-11-20 09:16:49.668086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.885 [2024-11-20 09:16:49.668284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.885 [2024-11-20 09:16:49.668311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.885 [2024-11-20 09:16:49.668342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.885 [2024-11-20 09:16:49.668354] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.885 [2024-11-20 09:16:49.680738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.885 [2024-11-20 09:16:49.681109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-11-20 09:16:49.681152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.885 [2024-11-20 09:16:49.681167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.885 [2024-11-20 09:16:49.681434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.885 [2024-11-20 09:16:49.681654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.885 [2024-11-20 09:16:49.681673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.885 [2024-11-20 09:16:49.681685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.885 [2024-11-20 09:16:49.681697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.885 [2024-11-20 09:16:49.693940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.885 [2024-11-20 09:16:49.694252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-11-20 09:16:49.694294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.885 [2024-11-20 09:16:49.694318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.885 [2024-11-20 09:16:49.694577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.885 [2024-11-20 09:16:49.694794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.885 [2024-11-20 09:16:49.694814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.885 [2024-11-20 09:16:49.694826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.885 [2024-11-20 09:16:49.694837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.885 [2024-11-20 09:16:49.707164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.885 [2024-11-20 09:16:49.707553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-11-20 09:16:49.707582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.885 [2024-11-20 09:16:49.707597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.885 [2024-11-20 09:16:49.707828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.885 [2024-11-20 09:16:49.708065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.885 [2024-11-20 09:16:49.708091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.885 [2024-11-20 09:16:49.708122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.885 [2024-11-20 09:16:49.708135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.885 [2024-11-20 09:16:49.720624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.885 [2024-11-20 09:16:49.720978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-11-20 09:16:49.721007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.885 [2024-11-20 09:16:49.721022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.885 [2024-11-20 09:16:49.721250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.885 [2024-11-20 09:16:49.721502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.885 [2024-11-20 09:16:49.721525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.885 [2024-11-20 09:16:49.721538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.885 [2024-11-20 09:16:49.721551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.885 [2024-11-20 09:16:49.733885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.885 [2024-11-20 09:16:49.734322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-11-20 09:16:49.734351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.885 [2024-11-20 09:16:49.734366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.885 [2024-11-20 09:16:49.734608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.885 [2024-11-20 09:16:49.734806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.885 [2024-11-20 09:16:49.734825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.885 [2024-11-20 09:16:49.734839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.885 [2024-11-20 09:16:49.734850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.885 [2024-11-20 09:16:49.747262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.885 [2024-11-20 09:16:49.747694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-11-20 09:16:49.747722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.885 [2024-11-20 09:16:49.747739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.885 [2024-11-20 09:16:49.747967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.885 [2024-11-20 09:16:49.748182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.885 [2024-11-20 09:16:49.748201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.885 [2024-11-20 09:16:49.748213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.885 [2024-11-20 09:16:49.748230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.885 [2024-11-20 09:16:49.760521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.885 [2024-11-20 09:16:49.760860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-11-20 09:16:49.760903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.885 [2024-11-20 09:16:49.760919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.885 [2024-11-20 09:16:49.761140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.885 [2024-11-20 09:16:49.761382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.885 [2024-11-20 09:16:49.761403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.885 [2024-11-20 09:16:49.761415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.885 [2024-11-20 09:16:49.761427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.885 [2024-11-20 09:16:49.773765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.885 [2024-11-20 09:16:49.774201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-11-20 09:16:49.774229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.885 [2024-11-20 09:16:49.774245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.885 [2024-11-20 09:16:49.774497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.885 [2024-11-20 09:16:49.774715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.885 [2024-11-20 09:16:49.774734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.885 [2024-11-20 09:16:49.774746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.885 [2024-11-20 09:16:49.774758] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.885 [2024-11-20 09:16:49.786935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.885 [2024-11-20 09:16:49.787310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.885 [2024-11-20 09:16:49.787338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.885 [2024-11-20 09:16:49.787354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.885 [2024-11-20 09:16:49.787595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.885 [2024-11-20 09:16:49.787794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.885 [2024-11-20 09:16:49.787813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.885 [2024-11-20 09:16:49.787826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.885 [2024-11-20 09:16:49.787837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.886 [2024-11-20 09:16:49.800288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.886 [2024-11-20 09:16:49.800669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-11-20 09:16:49.800696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.886 [2024-11-20 09:16:49.800711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.886 [2024-11-20 09:16:49.800930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.886 [2024-11-20 09:16:49.801143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.886 [2024-11-20 09:16:49.801162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.886 [2024-11-20 09:16:49.801174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.886 [2024-11-20 09:16:49.801186] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.886 [2024-11-20 09:16:49.813517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.886 [2024-11-20 09:16:49.813909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-11-20 09:16:49.813937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.886 [2024-11-20 09:16:49.813954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.886 [2024-11-20 09:16:49.814194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.886 [2024-11-20 09:16:49.814435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.886 [2024-11-20 09:16:49.814456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.886 [2024-11-20 09:16:49.814468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.886 [2024-11-20 09:16:49.814480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.886 [2024-11-20 09:16:49.826871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:12.886 [2024-11-20 09:16:49.827241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.886 [2024-11-20 09:16:49.827269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:12.886 [2024-11-20 09:16:49.827285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:12.886 [2024-11-20 09:16:49.827522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:12.886 [2024-11-20 09:16:49.827755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:12.886 [2024-11-20 09:16:49.827775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:12.886 [2024-11-20 09:16:49.827787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:12.886 [2024-11-20 09:16:49.827798] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:12.886 [2024-11-20 09:16:49.840056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:12.886 [2024-11-20 09:16:49.840413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.886 [2024-11-20 09:16:49.840442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:12.886 [2024-11-20 09:16:49.840458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:12.886 [2024-11-20 09:16:49.840691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:12.886 [2024-11-20 09:16:49.840906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:12.886 [2024-11-20 09:16:49.840925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:12.886 [2024-11-20 09:16:49.840938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:12.886 [2024-11-20 09:16:49.840949] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:12.886 [2024-11-20 09:16:49.853328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:12.886 [2024-11-20 09:16:49.853720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.886 [2024-11-20 09:16:49.853748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:12.886 [2024-11-20 09:16:49.853764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:12.886 [2024-11-20 09:16:49.853977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:12.886 [2024-11-20 09:16:49.854196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:12.886 [2024-11-20 09:16:49.854216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:12.886 [2024-11-20 09:16:49.854229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:12.886 [2024-11-20 09:16:49.854240] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:12.886 [2024-11-20 09:16:49.866561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:12.886 [2024-11-20 09:16:49.866956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.886 [2024-11-20 09:16:49.866984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:12.886 [2024-11-20 09:16:49.867000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:12.886 [2024-11-20 09:16:49.867240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:12.886 [2024-11-20 09:16:49.867468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:12.886 [2024-11-20 09:16:49.867488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:12.886 [2024-11-20 09:16:49.867501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:12.886 [2024-11-20 09:16:49.867513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:12.886 [2024-11-20 09:16:49.879945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:12.886 [2024-11-20 09:16:49.880282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.886 [2024-11-20 09:16:49.880318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:12.886 [2024-11-20 09:16:49.880359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:12.886 [2024-11-20 09:16:49.880589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:12.886 [2024-11-20 09:16:49.880806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:12.886 [2024-11-20 09:16:49.880831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:12.886 [2024-11-20 09:16:49.880844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:12.886 [2024-11-20 09:16:49.880855] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:12.886 [2024-11-20 09:16:49.893247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:12.886 [2024-11-20 09:16:49.893644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.886 [2024-11-20 09:16:49.893687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:12.886 [2024-11-20 09:16:49.893702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:12.886 [2024-11-20 09:16:49.893936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:12.886 [2024-11-20 09:16:49.894135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:12.886 [2024-11-20 09:16:49.894154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:12.886 [2024-11-20 09:16:49.894166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:12.886 [2024-11-20 09:16:49.894178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:12.886 [2024-11-20 09:16:49.906651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:12.886 [2024-11-20 09:16:49.907010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.886 [2024-11-20 09:16:49.907039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:12.886 [2024-11-20 09:16:49.907055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:12.886 [2024-11-20 09:16:49.907285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:12.886 [2024-11-20 09:16:49.907522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:12.886 [2024-11-20 09:16:49.907544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:12.886 [2024-11-20 09:16:49.907557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:12.886 [2024-11-20 09:16:49.907569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:13.146 [2024-11-20 09:16:49.919925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:13.146 [2024-11-20 09:16:49.920298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.146 [2024-11-20 09:16:49.920333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:13.146 [2024-11-20 09:16:49.920349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:13.146 [2024-11-20 09:16:49.920578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:13.146 [2024-11-20 09:16:49.920793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:13.146 [2024-11-20 09:16:49.920812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:13.146 [2024-11-20 09:16:49.920825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:13.146 [2024-11-20 09:16:49.920842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:13.146 [2024-11-20 09:16:49.933184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:13.146 [2024-11-20 09:16:49.933525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.146 [2024-11-20 09:16:49.933552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:13.146 [2024-11-20 09:16:49.933568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:13.146 [2024-11-20 09:16:49.933792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:13.146 [2024-11-20 09:16:49.934006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:13.147 [2024-11-20 09:16:49.934025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:13.147 [2024-11-20 09:16:49.934037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:13.147 [2024-11-20 09:16:49.934049] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:13.147 [2024-11-20 09:16:49.946489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:13.147 [2024-11-20 09:16:49.946821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.147 [2024-11-20 09:16:49.946863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:13.147 [2024-11-20 09:16:49.946878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:13.147 [2024-11-20 09:16:49.947084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:13.147 [2024-11-20 09:16:49.947320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:13.147 [2024-11-20 09:16:49.947341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:13.147 [2024-11-20 09:16:49.947370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:13.147 [2024-11-20 09:16:49.947382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:13.147 [2024-11-20 09:16:49.959739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:13.147 [2024-11-20 09:16:49.960105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.147 [2024-11-20 09:16:49.960133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:13.147 [2024-11-20 09:16:49.960149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:13.147 [2024-11-20 09:16:49.960372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:13.147 [2024-11-20 09:16:49.960591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:13.147 [2024-11-20 09:16:49.960612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:13.147 [2024-11-20 09:16:49.960625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:13.147 [2024-11-20 09:16:49.960637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:13.147 [2024-11-20 09:16:49.973013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:13.147 [2024-11-20 09:16:49.973416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.147 [2024-11-20 09:16:49.973445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:13.147 [2024-11-20 09:16:49.973460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:13.147 [2024-11-20 09:16:49.973690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:13.147 [2024-11-20 09:16:49.973904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:13.147 [2024-11-20 09:16:49.973923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:13.147 [2024-11-20 09:16:49.973935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:13.147 [2024-11-20 09:16:49.973947] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:13.147 [2024-11-20 09:16:49.986324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:13.147 [2024-11-20 09:16:49.986693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.147 [2024-11-20 09:16:49.986721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:13.147 [2024-11-20 09:16:49.986736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:13.147 [2024-11-20 09:16:49.986977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:13.147 [2024-11-20 09:16:49.987176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:13.147 [2024-11-20 09:16:49.987195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:13.147 [2024-11-20 09:16:49.987207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:13.147 [2024-11-20 09:16:49.987219] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:13.147 [2024-11-20 09:16:49.999565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:13.147 [2024-11-20 09:16:49.999955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.147 [2024-11-20 09:16:49.999984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:13.147 [2024-11-20 09:16:50.000000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:13.147 [2024-11-20 09:16:50.000229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:13.147 [2024-11-20 09:16:50.000483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:13.147 [2024-11-20 09:16:50.000505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:13.147 [2024-11-20 09:16:50.000519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:13.147 [2024-11-20 09:16:50.000532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:13.147 [2024-11-20 09:16:50.013004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:13.147 [2024-11-20 09:16:50.013336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.147 [2024-11-20 09:16:50.013365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:13.147 [2024-11-20 09:16:50.013381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:13.147 [2024-11-20 09:16:50.013623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:13.147 [2024-11-20 09:16:50.013845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:13.147 [2024-11-20 09:16:50.013865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:13.147 [2024-11-20 09:16:50.013878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:13.147 [2024-11-20 09:16:50.013890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:13.147 [2024-11-20 09:16:50.026515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:13.147 [2024-11-20 09:16:50.026934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.147 [2024-11-20 09:16:50.026965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:13.147 [2024-11-20 09:16:50.026981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:13.147 [2024-11-20 09:16:50.027211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:13.147 [2024-11-20 09:16:50.027465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:13.147 [2024-11-20 09:16:50.027487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:13.147 [2024-11-20 09:16:50.027501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:13.147 [2024-11-20 09:16:50.027513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:13.147 [2024-11-20 09:16:50.040011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:13.147 [2024-11-20 09:16:50.040363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.147 [2024-11-20 09:16:50.040393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:13.147 [2024-11-20 09:16:50.040409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:13.147 [2024-11-20 09:16:50.040622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:13.147 [2024-11-20 09:16:50.040869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:13.147 [2024-11-20 09:16:50.040891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:13.147 [2024-11-20 09:16:50.040905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:13.147 [2024-11-20 09:16:50.040918] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:13.147 [2024-11-20 09:16:50.053512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:13.147 [2024-11-20 09:16:50.053876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.147 [2024-11-20 09:16:50.053905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:13.147 [2024-11-20 09:16:50.053922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:13.147 [2024-11-20 09:16:50.054150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:13.147 [2024-11-20 09:16:50.054391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:13.147 [2024-11-20 09:16:50.054422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:13.147 [2024-11-20 09:16:50.054436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:13.147 [2024-11-20 09:16:50.054450] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:13.147 [2024-11-20 09:16:50.066931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:13.147 [2024-11-20 09:16:50.067298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.147 [2024-11-20 09:16:50.067334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:13.147 [2024-11-20 09:16:50.067350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:13.148 [2024-11-20 09:16:50.067578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:13.148 [2024-11-20 09:16:50.067821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:13.148 [2024-11-20 09:16:50.067842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:13.148 [2024-11-20 09:16:50.067855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:13.148 [2024-11-20 09:16:50.067868] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:13.148 [2024-11-20 09:16:50.080526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:13.148 [2024-11-20 09:16:50.080924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.148 [2024-11-20 09:16:50.080953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:13.148 [2024-11-20 09:16:50.080969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:13.148 [2024-11-20 09:16:50.081182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:13.148 [2024-11-20 09:16:50.081454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:13.148 [2024-11-20 09:16:50.081475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:13.148 [2024-11-20 09:16:50.081489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:13.148 [2024-11-20 09:16:50.081501] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:13.148 [2024-11-20 09:16:50.094136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:13.148 [2024-11-20 09:16:50.094487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.148 [2024-11-20 09:16:50.094516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:13.148 [2024-11-20 09:16:50.094532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:13.148 [2024-11-20 09:16:50.094760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:13.148 [2024-11-20 09:16:50.094975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:13.148 [2024-11-20 09:16:50.094995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:13.148 [2024-11-20 09:16:50.095008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:13.148 [2024-11-20 09:16:50.095025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:13.148 [2024-11-20 09:16:50.107797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:13.148 [2024-11-20 09:16:50.108170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.148 [2024-11-20 09:16:50.108212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:13.148 [2024-11-20 09:16:50.108226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:13.148 [2024-11-20 09:16:50.108464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:13.148 [2024-11-20 09:16:50.108716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:13.148 [2024-11-20 09:16:50.108736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:13.148 [2024-11-20 09:16:50.108749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:13.148 [2024-11-20 09:16:50.108760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:13.148 [2024-11-20 09:16:50.121217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:13.148 [2024-11-20 09:16:50.121647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.148 [2024-11-20 09:16:50.121675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:13.148 [2024-11-20 09:16:50.121691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:13.148 [2024-11-20 09:16:50.121936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:13.148 [2024-11-20 09:16:50.122149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:13.148 [2024-11-20 09:16:50.122169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:13.148 [2024-11-20 09:16:50.122181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:13.148 [2024-11-20 09:16:50.122192] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:13.148 [2024-11-20 09:16:50.134807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:13.148 [2024-11-20 09:16:50.135119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.148 [2024-11-20 09:16:50.135145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:13.148 [2024-11-20 09:16:50.135160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:13.148 [2024-11-20 09:16:50.135402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:13.148 [2024-11-20 09:16:50.135662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:13.148 [2024-11-20 09:16:50.135682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:13.148 [2024-11-20 09:16:50.135695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:13.148 [2024-11-20 09:16:50.135707] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:13.148 [2024-11-20 09:16:50.148381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:13.148 [2024-11-20 09:16:50.148787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.148 [2024-11-20 09:16:50.148815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:13.148 [2024-11-20 09:16:50.148831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:13.148 [2024-11-20 09:16:50.149059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:13.148 [2024-11-20 09:16:50.149273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:13.148 [2024-11-20 09:16:50.149316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:13.148 [2024-11-20 09:16:50.149331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:13.148 [2024-11-20 09:16:50.149343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:13.148 [2024-11-20 09:16:50.161796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:13.148 [2024-11-20 09:16:50.162126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.148 [2024-11-20 09:16:50.162153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:13.148 [2024-11-20 09:16:50.162168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:13.148 [2024-11-20 09:16:50.162399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:13.148 [2024-11-20 09:16:50.162619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:13.148 [2024-11-20 09:16:50.162639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:13.148 [2024-11-20 09:16:50.162667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:13.148 [2024-11-20 09:16:50.162679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:13.407 [2024-11-20 09:16:50.175418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:13.407 [2024-11-20 09:16:50.175831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.407 [2024-11-20 09:16:50.175874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:13.407 [2024-11-20 09:16:50.175890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:13.407 [2024-11-20 09:16:50.176124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:13.407 [2024-11-20 09:16:50.176354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:13.407 [2024-11-20 09:16:50.176376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:13.407 [2024-11-20 09:16:50.176390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:13.407 [2024-11-20 09:16:50.176403] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:13.407 [2024-11-20 09:16:50.188753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:13.407 [2024-11-20 09:16:50.189196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.407 [2024-11-20 09:16:50.189225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:13.407 [2024-11-20 09:16:50.189240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:13.407 [2024-11-20 09:16:50.189469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:13.407 [2024-11-20 09:16:50.189737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:13.407 [2024-11-20 09:16:50.189757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:13.407 [2024-11-20 09:16:50.189769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:13.407 [2024-11-20 09:16:50.189782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:13.407 [2024-11-20 09:16:50.202175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:13.407 [2024-11-20 09:16:50.202553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:13.407 [2024-11-20 09:16:50.202582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:13.407 [2024-11-20 09:16:50.202597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:13.407 [2024-11-20 09:16:50.202825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:13.407 [2024-11-20 09:16:50.203070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:13.407 [2024-11-20 09:16:50.203091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:13.407 [2024-11-20 09:16:50.203104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:13.407 [2024-11-20 09:16:50.203116] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:13.407 [2024-11-20 09:16:50.215860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.407 [2024-11-20 09:16:50.216221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.407 [2024-11-20 09:16:50.216250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.407 [2024-11-20 09:16:50.216266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.407 [2024-11-20 09:16:50.216489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.407 [2024-11-20 09:16:50.216736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.407 [2024-11-20 09:16:50.216756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.407 [2024-11-20 09:16:50.216769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.407 [2024-11-20 09:16:50.216781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.407 5173.25 IOPS, 20.21 MiB/s [2024-11-20T08:16:50.436Z] [2024-11-20 09:16:50.229572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.407 [2024-11-20 09:16:50.229947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.408 [2024-11-20 09:16:50.229976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.408 [2024-11-20 09:16:50.230000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.408 [2024-11-20 09:16:50.230230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.408 [2024-11-20 09:16:50.230485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.408 [2024-11-20 09:16:50.230512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.408 [2024-11-20 09:16:50.230527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.408 [2024-11-20 09:16:50.230540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.408 [2024-11-20 09:16:50.243237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.408 [2024-11-20 09:16:50.243612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.408 [2024-11-20 09:16:50.243657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.408 [2024-11-20 09:16:50.243672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.408 [2024-11-20 09:16:50.243894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.408 [2024-11-20 09:16:50.244107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.408 [2024-11-20 09:16:50.244127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.408 [2024-11-20 09:16:50.244140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.408 [2024-11-20 09:16:50.244151] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.408 [2024-11-20 09:16:50.256988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.408 [2024-11-20 09:16:50.257385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.408 [2024-11-20 09:16:50.257415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.408 [2024-11-20 09:16:50.257430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.408 [2024-11-20 09:16:50.257659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.408 [2024-11-20 09:16:50.257873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.408 [2024-11-20 09:16:50.257892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.408 [2024-11-20 09:16:50.257905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.408 [2024-11-20 09:16:50.257916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.408 [2024-11-20 09:16:50.270474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.408 [2024-11-20 09:16:50.270866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.408 [2024-11-20 09:16:50.270895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.408 [2024-11-20 09:16:50.270911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.408 [2024-11-20 09:16:50.271139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.408 [2024-11-20 09:16:50.271398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.408 [2024-11-20 09:16:50.271420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.408 [2024-11-20 09:16:50.271433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.408 [2024-11-20 09:16:50.271453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.408 [2024-11-20 09:16:50.283962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.408 [2024-11-20 09:16:50.284336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.408 [2024-11-20 09:16:50.284365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.408 [2024-11-20 09:16:50.284380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.408 [2024-11-20 09:16:50.284594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.408 [2024-11-20 09:16:50.284827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.408 [2024-11-20 09:16:50.284847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.408 [2024-11-20 09:16:50.284859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.408 [2024-11-20 09:16:50.284871] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.408 [2024-11-20 09:16:50.297381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.408 [2024-11-20 09:16:50.297786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.408 [2024-11-20 09:16:50.297815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.408 [2024-11-20 09:16:50.297830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.408 [2024-11-20 09:16:50.298043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.408 [2024-11-20 09:16:50.298296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.408 [2024-11-20 09:16:50.298326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.408 [2024-11-20 09:16:50.298355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.408 [2024-11-20 09:16:50.298368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.408 [2024-11-20 09:16:50.310743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.408 [2024-11-20 09:16:50.311149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.408 [2024-11-20 09:16:50.311177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.408 [2024-11-20 09:16:50.311193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.408 [2024-11-20 09:16:50.311416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.408 [2024-11-20 09:16:50.311672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.408 [2024-11-20 09:16:50.311703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.408 [2024-11-20 09:16:50.311715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.408 [2024-11-20 09:16:50.311727] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.408 [2024-11-20 09:16:50.324226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.408 [2024-11-20 09:16:50.324644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.408 [2024-11-20 09:16:50.324687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.408 [2024-11-20 09:16:50.324702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.408 [2024-11-20 09:16:50.324973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.408 [2024-11-20 09:16:50.325210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.408 [2024-11-20 09:16:50.325231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.408 [2024-11-20 09:16:50.325245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.408 [2024-11-20 09:16:50.325258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.408 [2024-11-20 09:16:50.338003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.408 [2024-11-20 09:16:50.338395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.408 [2024-11-20 09:16:50.338424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.408 [2024-11-20 09:16:50.338441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.408 [2024-11-20 09:16:50.338669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.408 [2024-11-20 09:16:50.338885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.408 [2024-11-20 09:16:50.338904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.408 [2024-11-20 09:16:50.338916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.408 [2024-11-20 09:16:50.338928] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.408 [2024-11-20 09:16:50.351603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.408 [2024-11-20 09:16:50.351999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.408 [2024-11-20 09:16:50.352028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.408 [2024-11-20 09:16:50.352044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.408 [2024-11-20 09:16:50.352257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.408 [2024-11-20 09:16:50.352484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.408 [2024-11-20 09:16:50.352506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.408 [2024-11-20 09:16:50.352520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.408 [2024-11-20 09:16:50.352532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.408 [2024-11-20 09:16:50.365317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.408 [2024-11-20 09:16:50.365691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.409 [2024-11-20 09:16:50.365720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.409 [2024-11-20 09:16:50.365736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.409 [2024-11-20 09:16:50.365983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.409 [2024-11-20 09:16:50.366219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.409 [2024-11-20 09:16:50.366240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.409 [2024-11-20 09:16:50.366254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.409 [2024-11-20 09:16:50.366266] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.409 [2024-11-20 09:16:50.378966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.409 [2024-11-20 09:16:50.379369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.409 [2024-11-20 09:16:50.379398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.409 [2024-11-20 09:16:50.379414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.409 [2024-11-20 09:16:50.379643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.409 [2024-11-20 09:16:50.379863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.409 [2024-11-20 09:16:50.379883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.409 [2024-11-20 09:16:50.379895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.409 [2024-11-20 09:16:50.379907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.409 [2024-11-20 09:16:50.392701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.409 [2024-11-20 09:16:50.393037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.409 [2024-11-20 09:16:50.393064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.409 [2024-11-20 09:16:50.393079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.409 [2024-11-20 09:16:50.393312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.409 [2024-11-20 09:16:50.393554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.409 [2024-11-20 09:16:50.393575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.409 [2024-11-20 09:16:50.393588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.409 [2024-11-20 09:16:50.393600] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.409 [2024-11-20 09:16:50.406282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.409 [2024-11-20 09:16:50.406680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.409 [2024-11-20 09:16:50.406709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.409 [2024-11-20 09:16:50.406725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.409 [2024-11-20 09:16:50.406957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.409 [2024-11-20 09:16:50.407188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.409 [2024-11-20 09:16:50.407213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.409 [2024-11-20 09:16:50.407242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.409 [2024-11-20 09:16:50.407255] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.409 [2024-11-20 09:16:50.419910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.409 [2024-11-20 09:16:50.420318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.409 [2024-11-20 09:16:50.420347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.409 [2024-11-20 09:16:50.420363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.409 [2024-11-20 09:16:50.420576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.409 [2024-11-20 09:16:50.420808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.409 [2024-11-20 09:16:50.420827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.409 [2024-11-20 09:16:50.420839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.409 [2024-11-20 09:16:50.420850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.668 [2024-11-20 09:16:50.433599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.668 [2024-11-20 09:16:50.433965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.668 [2024-11-20 09:16:50.433994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.668 [2024-11-20 09:16:50.434010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.668 [2024-11-20 09:16:50.434268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.668 [2024-11-20 09:16:50.434510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.668 [2024-11-20 09:16:50.434532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.668 [2024-11-20 09:16:50.434546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.668 [2024-11-20 09:16:50.434558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.668 [2024-11-20 09:16:50.447087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.668 [2024-11-20 09:16:50.447460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.668 [2024-11-20 09:16:50.447488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.668 [2024-11-20 09:16:50.447504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.668 [2024-11-20 09:16:50.447743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.668 [2024-11-20 09:16:50.447936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.668 [2024-11-20 09:16:50.447954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.668 [2024-11-20 09:16:50.447967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.668 [2024-11-20 09:16:50.447983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.668 [2024-11-20 09:16:50.460598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.668 [2024-11-20 09:16:50.460953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.668 [2024-11-20 09:16:50.460981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.668 [2024-11-20 09:16:50.460997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.668 [2024-11-20 09:16:50.461203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.668 [2024-11-20 09:16:50.461467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.668 [2024-11-20 09:16:50.461490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.668 [2024-11-20 09:16:50.461503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.668 [2024-11-20 09:16:50.461515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.668 [2024-11-20 09:16:50.473911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.668 [2024-11-20 09:16:50.474239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.668 [2024-11-20 09:16:50.474266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.668 [2024-11-20 09:16:50.474281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.668 [2024-11-20 09:16:50.474521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.668 [2024-11-20 09:16:50.474751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.668 [2024-11-20 09:16:50.474789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.668 [2024-11-20 09:16:50.474802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.668 [2024-11-20 09:16:50.474815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.668 [2024-11-20 09:16:50.487429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.668 [2024-11-20 09:16:50.487787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.668 [2024-11-20 09:16:50.487814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.668 [2024-11-20 09:16:50.487829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.668 [2024-11-20 09:16:50.488043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.668 [2024-11-20 09:16:50.488253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.668 [2024-11-20 09:16:50.488273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.668 [2024-11-20 09:16:50.488300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.668 [2024-11-20 09:16:50.488324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.668 [2024-11-20 09:16:50.500820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.668 [2024-11-20 09:16:50.501206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.668 [2024-11-20 09:16:50.501235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.668 [2024-11-20 09:16:50.501251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.668 [2024-11-20 09:16:50.501488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.668 [2024-11-20 09:16:50.501743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.668 [2024-11-20 09:16:50.501764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.668 [2024-11-20 09:16:50.501777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.668 [2024-11-20 09:16:50.501790] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.668 [2024-11-20 09:16:50.513968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.668 [2024-11-20 09:16:50.514417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.668 [2024-11-20 09:16:50.514445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.668 [2024-11-20 09:16:50.514461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.668 [2024-11-20 09:16:50.514689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.668 [2024-11-20 09:16:50.514914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.668 [2024-11-20 09:16:50.514933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.668 [2024-11-20 09:16:50.514961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.668 [2024-11-20 09:16:50.514972] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.668 [2024-11-20 09:16:50.527350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.668 [2024-11-20 09:16:50.527713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.669 [2024-11-20 09:16:50.527740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.669 [2024-11-20 09:16:50.527756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.669 [2024-11-20 09:16:50.527990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.669 [2024-11-20 09:16:50.528198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.669 [2024-11-20 09:16:50.528216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.669 [2024-11-20 09:16:50.528228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.669 [2024-11-20 09:16:50.528239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.669 [2024-11-20 09:16:50.540461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.669 [2024-11-20 09:16:50.540807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.669 [2024-11-20 09:16:50.540833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.669 [2024-11-20 09:16:50.540848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.669 [2024-11-20 09:16:50.541068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.669 [2024-11-20 09:16:50.541277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.669 [2024-11-20 09:16:50.541296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.669 [2024-11-20 09:16:50.541333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.669 [2024-11-20 09:16:50.541346] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.669 [2024-11-20 09:16:50.553532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.669 [2024-11-20 09:16:50.553959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.669 [2024-11-20 09:16:50.553986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.669 [2024-11-20 09:16:50.554016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.669 [2024-11-20 09:16:50.554257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.669 [2024-11-20 09:16:50.554497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.669 [2024-11-20 09:16:50.554519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.669 [2024-11-20 09:16:50.554531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.669 [2024-11-20 09:16:50.554543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.669 [2024-11-20 09:16:50.566730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.669 [2024-11-20 09:16:50.567048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.669 [2024-11-20 09:16:50.567075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.669 [2024-11-20 09:16:50.567089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.669 [2024-11-20 09:16:50.567288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.669 [2024-11-20 09:16:50.567512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.669 [2024-11-20 09:16:50.567532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.669 [2024-11-20 09:16:50.567545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.669 [2024-11-20 09:16:50.567556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.669 [2024-11-20 09:16:50.579789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.669 [2024-11-20 09:16:50.580119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.669 [2024-11-20 09:16:50.580195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.669 [2024-11-20 09:16:50.580211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.669 [2024-11-20 09:16:50.580474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.669 [2024-11-20 09:16:50.580692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.669 [2024-11-20 09:16:50.580715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.669 [2024-11-20 09:16:50.580728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.669 [2024-11-20 09:16:50.580740] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.669 [2024-11-20 09:16:50.592774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.669 [2024-11-20 09:16:50.593130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.669 [2024-11-20 09:16:50.593197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.669 [2024-11-20 09:16:50.593212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.669 [2024-11-20 09:16:50.593464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.669 [2024-11-20 09:16:50.593704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.669 [2024-11-20 09:16:50.593722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.669 [2024-11-20 09:16:50.593734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.669 [2024-11-20 09:16:50.593745] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.669 [2024-11-20 09:16:50.605871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.669 [2024-11-20 09:16:50.606169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.669 [2024-11-20 09:16:50.606210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.669 [2024-11-20 09:16:50.606225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.669 [2024-11-20 09:16:50.606487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.669 [2024-11-20 09:16:50.606722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.669 [2024-11-20 09:16:50.606740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.669 [2024-11-20 09:16:50.606752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.669 [2024-11-20 09:16:50.606763] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.669 [2024-11-20 09:16:50.618868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.669 [2024-11-20 09:16:50.619334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.669 [2024-11-20 09:16:50.619379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.669 [2024-11-20 09:16:50.619394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.669 [2024-11-20 09:16:50.619658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.669 [2024-11-20 09:16:50.619851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.669 [2024-11-20 09:16:50.619869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.669 [2024-11-20 09:16:50.619881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.669 [2024-11-20 09:16:50.619897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.669 [2024-11-20 09:16:50.632030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.669 [2024-11-20 09:16:50.632408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.669 [2024-11-20 09:16:50.632435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.669 [2024-11-20 09:16:50.632449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.669 [2024-11-20 09:16:50.632664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.669 [2024-11-20 09:16:50.632873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.669 [2024-11-20 09:16:50.632891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.669 [2024-11-20 09:16:50.632903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.669 [2024-11-20 09:16:50.632914] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.669 [2024-11-20 09:16:50.645115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.669 [2024-11-20 09:16:50.645481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.669 [2024-11-20 09:16:50.645523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.669 [2024-11-20 09:16:50.645538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.669 [2024-11-20 09:16:50.645786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.669 [2024-11-20 09:16:50.645984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.669 [2024-11-20 09:16:50.646003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.669 [2024-11-20 09:16:50.646015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.669 [2024-11-20 09:16:50.646027] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.669 [2024-11-20 09:16:50.658340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.670 [2024-11-20 09:16:50.658736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.670 [2024-11-20 09:16:50.658763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.670 [2024-11-20 09:16:50.658779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.670 [2024-11-20 09:16:50.658999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.670 [2024-11-20 09:16:50.659207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.670 [2024-11-20 09:16:50.659226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.670 [2024-11-20 09:16:50.659237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.670 [2024-11-20 09:16:50.659249] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.670 [2024-11-20 09:16:50.671465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.670 [2024-11-20 09:16:50.671963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.670 [2024-11-20 09:16:50.672005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.670 [2024-11-20 09:16:50.672021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.670 [2024-11-20 09:16:50.672271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.670 [2024-11-20 09:16:50.672518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.670 [2024-11-20 09:16:50.672539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.670 [2024-11-20 09:16:50.672552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.670 [2024-11-20 09:16:50.672564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.670 [2024-11-20 09:16:50.684479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.670 [2024-11-20 09:16:50.684840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.670 [2024-11-20 09:16:50.684867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.670 [2024-11-20 09:16:50.684882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.670 [2024-11-20 09:16:50.685117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.670 [2024-11-20 09:16:50.685351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.670 [2024-11-20 09:16:50.685372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.670 [2024-11-20 09:16:50.685385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.670 [2024-11-20 09:16:50.685397] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.945 [2024-11-20 09:16:50.697821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.945 [2024-11-20 09:16:50.698204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.945 [2024-11-20 09:16:50.698232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.945 [2024-11-20 09:16:50.698247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.945 [2024-11-20 09:16:50.698497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.945 [2024-11-20 09:16:50.698730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.945 [2024-11-20 09:16:50.698751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.945 [2024-11-20 09:16:50.698765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.945 [2024-11-20 09:16:50.698777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.945 [2024-11-20 09:16:50.710839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.945 [2024-11-20 09:16:50.711329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.945 [2024-11-20 09:16:50.711375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.945 [2024-11-20 09:16:50.711390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.945 [2024-11-20 09:16:50.711656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.945 [2024-11-20 09:16:50.711848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.945 [2024-11-20 09:16:50.711866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.945 [2024-11-20 09:16:50.711878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.945 [2024-11-20 09:16:50.711889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.945 [2024-11-20 09:16:50.723852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.945 [2024-11-20 09:16:50.724328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.945 [2024-11-20 09:16:50.724356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.945 [2024-11-20 09:16:50.724386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.945 [2024-11-20 09:16:50.724616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.946 [2024-11-20 09:16:50.724848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.946 [2024-11-20 09:16:50.724868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.946 [2024-11-20 09:16:50.724880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.946 [2024-11-20 09:16:50.724892] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.946 [2024-11-20 09:16:50.737079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.946 [2024-11-20 09:16:50.737468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.946 [2024-11-20 09:16:50.737497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.946 [2024-11-20 09:16:50.737513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.946 [2024-11-20 09:16:50.737752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.946 [2024-11-20 09:16:50.737960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.946 [2024-11-20 09:16:50.737979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.946 [2024-11-20 09:16:50.737991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.946 [2024-11-20 09:16:50.738002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.946 [2024-11-20 09:16:50.750349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.946 [2024-11-20 09:16:50.750708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.946 [2024-11-20 09:16:50.750734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.947 [2024-11-20 09:16:50.750749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.947 [2024-11-20 09:16:50.750964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.947 [2024-11-20 09:16:50.751173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.947 [2024-11-20 09:16:50.751196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.947 [2024-11-20 09:16:50.751210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.947 [2024-11-20 09:16:50.751221] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.947 [2024-11-20 09:16:50.763605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.947 [2024-11-20 09:16:50.763920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.947 [2024-11-20 09:16:50.763947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.947 [2024-11-20 09:16:50.763962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.947 [2024-11-20 09:16:50.764176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.947 [2024-11-20 09:16:50.764431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.947 [2024-11-20 09:16:50.764452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.947 [2024-11-20 09:16:50.764464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.947 [2024-11-20 09:16:50.764476] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.947 [2024-11-20 09:16:50.776668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.947 [2024-11-20 09:16:50.777026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.947 [2024-11-20 09:16:50.777052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.947 [2024-11-20 09:16:50.777067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.948 [2024-11-20 09:16:50.777282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.948 [2024-11-20 09:16:50.777522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.948 [2024-11-20 09:16:50.777542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.948 [2024-11-20 09:16:50.777554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.948 [2024-11-20 09:16:50.777566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.948 [2024-11-20 09:16:50.789696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.948 [2024-11-20 09:16:50.790082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.948 [2024-11-20 09:16:50.790124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.948 [2024-11-20 09:16:50.790140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.948 [2024-11-20 09:16:50.790401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.948 [2024-11-20 09:16:50.790626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.948 [2024-11-20 09:16:50.790645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.948 [2024-11-20 09:16:50.790671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.948 [2024-11-20 09:16:50.790687] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.948 [2024-11-20 09:16:50.802714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.948 [2024-11-20 09:16:50.803120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.948 [2024-11-20 09:16:50.803148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.948 [2024-11-20 09:16:50.803164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.948 [2024-11-20 09:16:50.803418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.950 [2024-11-20 09:16:50.803654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.950 [2024-11-20 09:16:50.803673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.950 [2024-11-20 09:16:50.803685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.950 [2024-11-20 09:16:50.803696] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.950 [2024-11-20 09:16:50.815825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.950 [2024-11-20 09:16:50.816186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.950 [2024-11-20 09:16:50.816228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.950 [2024-11-20 09:16:50.816242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.950 [2024-11-20 09:16:50.816493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.950 [2024-11-20 09:16:50.816728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.950 [2024-11-20 09:16:50.816747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.950 [2024-11-20 09:16:50.816759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.950 [2024-11-20 09:16:50.816771] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.950 [2024-11-20 09:16:50.828845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.950 [2024-11-20 09:16:50.829271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.950 [2024-11-20 09:16:50.829320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.950 [2024-11-20 09:16:50.829338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.950 [2024-11-20 09:16:50.829603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.950 [2024-11-20 09:16:50.829796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.951 [2024-11-20 09:16:50.829814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.951 [2024-11-20 09:16:50.829826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.951 [2024-11-20 09:16:50.829837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.951 [2024-11-20 09:16:50.841869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.951 [2024-11-20 09:16:50.842240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.951 [2024-11-20 09:16:50.842283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.951 [2024-11-20 09:16:50.842298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.951 [2024-11-20 09:16:50.842561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.951 [2024-11-20 09:16:50.842771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.951 [2024-11-20 09:16:50.842789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.951 [2024-11-20 09:16:50.842801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.952 [2024-11-20 09:16:50.842812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.952 [2024-11-20 09:16:50.855156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.952 [2024-11-20 09:16:50.855549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.952 [2024-11-20 09:16:50.855578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.952 [2024-11-20 09:16:50.855593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.952 [2024-11-20 09:16:50.855832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.952 [2024-11-20 09:16:50.856042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.952 [2024-11-20 09:16:50.856061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.952 [2024-11-20 09:16:50.856073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.952 [2024-11-20 09:16:50.856084] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.952 [2024-11-20 09:16:50.868387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.952 [2024-11-20 09:16:50.868817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.952 [2024-11-20 09:16:50.868844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.952 [2024-11-20 09:16:50.868873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.952 [2024-11-20 09:16:50.869121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.952 [2024-11-20 09:16:50.869323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.952 [2024-11-20 09:16:50.869343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.952 [2024-11-20 09:16:50.869354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.952 [2024-11-20 09:16:50.869365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.952 [2024-11-20 09:16:50.881499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.952 [2024-11-20 09:16:50.881828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.953 [2024-11-20 09:16:50.881854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.953 [2024-11-20 09:16:50.881869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.953 [2024-11-20 09:16:50.882096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.953 [2024-11-20 09:16:50.882313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.953 [2024-11-20 09:16:50.882333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.953 [2024-11-20 09:16:50.882345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.953 [2024-11-20 09:16:50.882356] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.953 [2024-11-20 09:16:50.894741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.953 [2024-11-20 09:16:50.895104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.953 [2024-11-20 09:16:50.895131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.953 [2024-11-20 09:16:50.895147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.953 [2024-11-20 09:16:50.895394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.953 [2024-11-20 09:16:50.895593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.953 [2024-11-20 09:16:50.895611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.953 [2024-11-20 09:16:50.895623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.953 [2024-11-20 09:16:50.895635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.953 [2024-11-20 09:16:50.907845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.953 [2024-11-20 09:16:50.908348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.953 [2024-11-20 09:16:50.908390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.953 [2024-11-20 09:16:50.908407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.954 [2024-11-20 09:16:50.908644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.954 [2024-11-20 09:16:50.908851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.954 [2024-11-20 09:16:50.908870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.954 [2024-11-20 09:16:50.908882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.954 [2024-11-20 09:16:50.908893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.954 [2024-11-20 09:16:50.920872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.954 [2024-11-20 09:16:50.921317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.954 [2024-11-20 09:16:50.921364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.954 [2024-11-20 09:16:50.921378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.954 [2024-11-20 09:16:50.921626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.954 [2024-11-20 09:16:50.921835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.954 [2024-11-20 09:16:50.921858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.954 [2024-11-20 09:16:50.921871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.954 [2024-11-20 09:16:50.921882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.954 [2024-11-20 09:16:50.933849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.954 [2024-11-20 09:16:50.934351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.954 [2024-11-20 09:16:50.934379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.954 [2024-11-20 09:16:50.934393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.954 [2024-11-20 09:16:50.934654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.954 [2024-11-20 09:16:50.934847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.954 [2024-11-20 09:16:50.934866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.954 [2024-11-20 09:16:50.934878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.954 [2024-11-20 09:16:50.934889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.955 [2024-11-20 09:16:50.946862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:13.955 [2024-11-20 09:16:50.947226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:13.955 [2024-11-20 09:16:50.947253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:13.955 [2024-11-20 09:16:50.947268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:13.955 [2024-11-20 09:16:50.947497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:13.955 [2024-11-20 09:16:50.947741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:13.955 [2024-11-20 09:16:50.947760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:13.955 [2024-11-20 09:16:50.947772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:13.955 [2024-11-20 09:16:50.947784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:13.955 [2024-11-20 09:16:50.960479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.213 [2024-11-20 09:16:50.960843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.213 [2024-11-20 09:16:50.960886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.213 [2024-11-20 09:16:50.960902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.213 [2024-11-20 09:16:50.961143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.213 [2024-11-20 09:16:50.961394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.213 [2024-11-20 09:16:50.961416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.213 [2024-11-20 09:16:50.961429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.213 [2024-11-20 09:16:50.961446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.213 [2024-11-20 09:16:50.973591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.213 [2024-11-20 09:16:50.973954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.213 [2024-11-20 09:16:50.973997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.213 [2024-11-20 09:16:50.974013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.213 [2024-11-20 09:16:50.974264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.213 [2024-11-20 09:16:50.974504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.213 [2024-11-20 09:16:50.974525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.213 [2024-11-20 09:16:50.974537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.213 [2024-11-20 09:16:50.974549] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.213 [2024-11-20 09:16:50.987168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.213 [2024-11-20 09:16:50.987509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.213 [2024-11-20 09:16:50.987538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.213 [2024-11-20 09:16:50.987553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.213 [2024-11-20 09:16:50.987782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.213 [2024-11-20 09:16:50.987989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.213 [2024-11-20 09:16:50.988008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.213 [2024-11-20 09:16:50.988020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.213 [2024-11-20 09:16:50.988031] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.213 [2024-11-20 09:16:51.000387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.213 [2024-11-20 09:16:51.000750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.213 [2024-11-20 09:16:51.000791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.213 [2024-11-20 09:16:51.000806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.213 [2024-11-20 09:16:51.001055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.213 [2024-11-20 09:16:51.001262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.213 [2024-11-20 09:16:51.001280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.213 [2024-11-20 09:16:51.001293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.213 [2024-11-20 09:16:51.001329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.213 [2024-11-20 09:16:51.013494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.213 [2024-11-20 09:16:51.013971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.213 [2024-11-20 09:16:51.014026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.213 [2024-11-20 09:16:51.014041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.213 [2024-11-20 09:16:51.014311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.213 [2024-11-20 09:16:51.014541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.213 [2024-11-20 09:16:51.014561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.213 [2024-11-20 09:16:51.014574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.213 [2024-11-20 09:16:51.014586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.213 [2024-11-20 09:16:51.026510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.213 [2024-11-20 09:16:51.026905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.213 [2024-11-20 09:16:51.026932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.213 [2024-11-20 09:16:51.026948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.213 [2024-11-20 09:16:51.027169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.213 [2024-11-20 09:16:51.027424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.213 [2024-11-20 09:16:51.027444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.213 [2024-11-20 09:16:51.027456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.213 [2024-11-20 09:16:51.027468] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.213 [2024-11-20 09:16:51.039508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.213 [2024-11-20 09:16:51.039841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.213 [2024-11-20 09:16:51.039869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.213 [2024-11-20 09:16:51.039884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.213 [2024-11-20 09:16:51.040104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.213 [2024-11-20 09:16:51.040337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.213 [2024-11-20 09:16:51.040357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.214 [2024-11-20 09:16:51.040369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.214 [2024-11-20 09:16:51.040381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.214 [2024-11-20 09:16:51.052566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.214 [2024-11-20 09:16:51.052956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.214 [2024-11-20 09:16:51.052984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.214 [2024-11-20 09:16:51.052999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.214 [2024-11-20 09:16:51.053234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.214 [2024-11-20 09:16:51.053510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.214 [2024-11-20 09:16:51.053532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.214 [2024-11-20 09:16:51.053545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.214 [2024-11-20 09:16:51.053557] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.214 [2024-11-20 09:16:51.065607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.214 [2024-11-20 09:16:51.065971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.214 [2024-11-20 09:16:51.066013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.214 [2024-11-20 09:16:51.066028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.214 [2024-11-20 09:16:51.066279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.214 [2024-11-20 09:16:51.066525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.214 [2024-11-20 09:16:51.066547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.214 [2024-11-20 09:16:51.066560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.214 [2024-11-20 09:16:51.066572] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.214 [2024-11-20 09:16:51.078767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.214 [2024-11-20 09:16:51.079129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.214 [2024-11-20 09:16:51.079156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.214 [2024-11-20 09:16:51.079172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.214 [2024-11-20 09:16:51.079419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.214 [2024-11-20 09:16:51.079646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.214 [2024-11-20 09:16:51.079666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.214 [2024-11-20 09:16:51.079678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.214 [2024-11-20 09:16:51.079705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.214 [2024-11-20 09:16:51.091822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.214 [2024-11-20 09:16:51.092248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.214 [2024-11-20 09:16:51.092275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.214 [2024-11-20 09:16:51.092290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.214 [2024-11-20 09:16:51.092520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.214 [2024-11-20 09:16:51.092732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.214 [2024-11-20 09:16:51.092756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.214 [2024-11-20 09:16:51.092769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.214 [2024-11-20 09:16:51.092780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.214 [2024-11-20 09:16:51.105044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.214 [2024-11-20 09:16:51.105428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.214 [2024-11-20 09:16:51.105457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.214 [2024-11-20 09:16:51.105472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.214 [2024-11-20 09:16:51.105715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.214 [2024-11-20 09:16:51.105924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.214 [2024-11-20 09:16:51.105942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.214 [2024-11-20 09:16:51.105953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.214 [2024-11-20 09:16:51.105964] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.214 [2024-11-20 09:16:51.118053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.214 [2024-11-20 09:16:51.118493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.214 [2024-11-20 09:16:51.118521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.214 [2024-11-20 09:16:51.118536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.214 [2024-11-20 09:16:51.118770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.214 [2024-11-20 09:16:51.118979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.214 [2024-11-20 09:16:51.118998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.214 [2024-11-20 09:16:51.119011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.214 [2024-11-20 09:16:51.119022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.214 [2024-11-20 09:16:51.131166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.214 [2024-11-20 09:16:51.131538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.214 [2024-11-20 09:16:51.131581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.214 [2024-11-20 09:16:51.131596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.214 [2024-11-20 09:16:51.131863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.214 [2024-11-20 09:16:51.132056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.214 [2024-11-20 09:16:51.132074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.214 [2024-11-20 09:16:51.132086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.214 [2024-11-20 09:16:51.132102] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.214 [2024-11-20 09:16:51.144188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.214 [2024-11-20 09:16:51.144561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.214 [2024-11-20 09:16:51.144604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.214 [2024-11-20 09:16:51.144620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.214 [2024-11-20 09:16:51.144873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.214 [2024-11-20 09:16:51.145080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.214 [2024-11-20 09:16:51.145098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.214 [2024-11-20 09:16:51.145110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.214 [2024-11-20 09:16:51.145121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.214 [2024-11-20 09:16:51.157258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.214 [2024-11-20 09:16:51.157695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.214 [2024-11-20 09:16:51.157723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.214 [2024-11-20 09:16:51.157753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.214 [2024-11-20 09:16:51.157993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.214 [2024-11-20 09:16:51.158185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.214 [2024-11-20 09:16:51.158203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.214 [2024-11-20 09:16:51.158215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.214 [2024-11-20 09:16:51.158226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.214 [2024-11-20 09:16:51.170336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.214 [2024-11-20 09:16:51.170729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.214 [2024-11-20 09:16:51.170757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.214 [2024-11-20 09:16:51.170772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.215 [2024-11-20 09:16:51.170994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.215 [2024-11-20 09:16:51.171219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.215 [2024-11-20 09:16:51.171238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.215 [2024-11-20 09:16:51.171250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.215 [2024-11-20 09:16:51.171260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.215 [2024-11-20 09:16:51.183389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.215 [2024-11-20 09:16:51.183708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.215 [2024-11-20 09:16:51.183734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.215 [2024-11-20 09:16:51.183748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.215 [2024-11-20 09:16:51.183948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.215 [2024-11-20 09:16:51.184173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.215 [2024-11-20 09:16:51.184192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.215 [2024-11-20 09:16:51.184204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.215 [2024-11-20 09:16:51.184215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.215 [2024-11-20 09:16:51.196394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.215 [2024-11-20 09:16:51.196706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.215 [2024-11-20 09:16:51.196746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.215 [2024-11-20 09:16:51.196760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.215 [2024-11-20 09:16:51.196959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.215 [2024-11-20 09:16:51.197184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.215 [2024-11-20 09:16:51.197203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.215 [2024-11-20 09:16:51.197215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.215 [2024-11-20 09:16:51.197226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.215 [2024-11-20 09:16:51.209415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.215 [2024-11-20 09:16:51.209778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.215 [2024-11-20 09:16:51.209821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.215 [2024-11-20 09:16:51.209837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.215 [2024-11-20 09:16:51.210105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.215 [2024-11-20 09:16:51.210325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.215 [2024-11-20 09:16:51.210345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.215 [2024-11-20 09:16:51.210372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.215 [2024-11-20 09:16:51.210384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.215 4138.60 IOPS, 16.17 MiB/s [2024-11-20T08:16:51.244Z] [2024-11-20 09:16:51.223615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.215 [2024-11-20 09:16:51.223994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.215 [2024-11-20 09:16:51.224021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.215 [2024-11-20 09:16:51.224040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.215 [2024-11-20 09:16:51.224256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.215 [2024-11-20 09:16:51.224497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.215 [2024-11-20 09:16:51.224518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.215 [2024-11-20 09:16:51.224531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.215 [2024-11-20 09:16:51.224542] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.215 [2024-11-20 09:16:51.237066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.215 [2024-11-20 09:16:51.237414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.215 [2024-11-20 09:16:51.237443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.215 [2024-11-20 09:16:51.237459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.215 [2024-11-20 09:16:51.237672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.215 [2024-11-20 09:16:51.237891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.215 [2024-11-20 09:16:51.237912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.215 [2024-11-20 09:16:51.237926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.215 [2024-11-20 09:16:51.237938] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.475 [2024-11-20 09:16:51.250442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.475 [2024-11-20 09:16:51.250871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.475 [2024-11-20 09:16:51.250898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.475 [2024-11-20 09:16:51.250912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.475 [2024-11-20 09:16:51.251131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.475 [2024-11-20 09:16:51.251366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.475 [2024-11-20 09:16:51.251387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.475 [2024-11-20 09:16:51.251400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.475 [2024-11-20 09:16:51.251412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.475 [2024-11-20 09:16:51.263665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.475 [2024-11-20 09:16:51.264028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.475 [2024-11-20 09:16:51.264071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.475 [2024-11-20 09:16:51.264086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.475 [2024-11-20 09:16:51.264349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.475 [2024-11-20 09:16:51.264548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.475 [2024-11-20 09:16:51.264572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.475 [2024-11-20 09:16:51.264585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.475 [2024-11-20 09:16:51.264611] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.475 [2024-11-20 09:16:51.276702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.475 [2024-11-20 09:16:51.277127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.475 [2024-11-20 09:16:51.277154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.475 [2024-11-20 09:16:51.277169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.475 [2024-11-20 09:16:51.277416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.475 [2024-11-20 09:16:51.277652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.475 [2024-11-20 09:16:51.277672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.475 [2024-11-20 09:16:51.277684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.475 [2024-11-20 09:16:51.277695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.475 [2024-11-20 09:16:51.289825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.475 [2024-11-20 09:16:51.290217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.475 [2024-11-20 09:16:51.290245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.475 [2024-11-20 09:16:51.290259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.475 [2024-11-20 09:16:51.290512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.475 [2024-11-20 09:16:51.290757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.475 [2024-11-20 09:16:51.290776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.475 [2024-11-20 09:16:51.290787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.475 [2024-11-20 09:16:51.290798] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.475 [2024-11-20 09:16:51.302831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.475 [2024-11-20 09:16:51.303231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.475 [2024-11-20 09:16:51.303301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.475 [2024-11-20 09:16:51.303327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.475 [2024-11-20 09:16:51.303593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.475 [2024-11-20 09:16:51.303802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.475 [2024-11-20 09:16:51.303821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.476 [2024-11-20 09:16:51.303833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.476 [2024-11-20 09:16:51.303851] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.476 [2024-11-20 09:16:51.316066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.476 [2024-11-20 09:16:51.316465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.476 [2024-11-20 09:16:51.316493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.476 [2024-11-20 09:16:51.316509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.476 [2024-11-20 09:16:51.316752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.476 [2024-11-20 09:16:51.316960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.476 [2024-11-20 09:16:51.316979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.476 [2024-11-20 09:16:51.316991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.476 [2024-11-20 09:16:51.317002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.476 [2024-11-20 09:16:51.329099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.476 [2024-11-20 09:16:51.329466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.476 [2024-11-20 09:16:51.329508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.476 [2024-11-20 09:16:51.329523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.476 [2024-11-20 09:16:51.329769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.476 [2024-11-20 09:16:51.329976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.476 [2024-11-20 09:16:51.329994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.476 [2024-11-20 09:16:51.330006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.476 [2024-11-20 09:16:51.330017] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.476 [2024-11-20 09:16:51.342163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.476 [2024-11-20 09:16:51.342532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.476 [2024-11-20 09:16:51.342558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.476 [2024-11-20 09:16:51.342573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.476 [2024-11-20 09:16:51.342787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.476 [2024-11-20 09:16:51.342995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.476 [2024-11-20 09:16:51.343013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.476 [2024-11-20 09:16:51.343025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.476 [2024-11-20 09:16:51.343036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.476 [2024-11-20 09:16:51.355370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.476 [2024-11-20 09:16:51.355753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.476 [2024-11-20 09:16:51.355793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.476 [2024-11-20 09:16:51.355808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.476 [2024-11-20 09:16:51.356007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.476 [2024-11-20 09:16:51.356233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.476 [2024-11-20 09:16:51.356252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.476 [2024-11-20 09:16:51.356264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.476 [2024-11-20 09:16:51.356275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.476 [2024-11-20 09:16:51.368509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.476 [2024-11-20 09:16:51.369014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.476 [2024-11-20 09:16:51.369055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.476 [2024-11-20 09:16:51.369071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.476 [2024-11-20 09:16:51.369330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.476 [2024-11-20 09:16:51.369529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.476 [2024-11-20 09:16:51.369550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.476 [2024-11-20 09:16:51.369563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.476 [2024-11-20 09:16:51.369574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.476 [2024-11-20 09:16:51.381681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.476 [2024-11-20 09:16:51.382019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.476 [2024-11-20 09:16:51.382085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.476 [2024-11-20 09:16:51.382101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.476 [2024-11-20 09:16:51.382338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.476 [2024-11-20 09:16:51.382552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.476 [2024-11-20 09:16:51.382573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.476 [2024-11-20 09:16:51.382586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.476 [2024-11-20 09:16:51.382599] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.476 [2024-11-20 09:16:51.394814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.476 [2024-11-20 09:16:51.395208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.476 [2024-11-20 09:16:51.395280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.476 [2024-11-20 09:16:51.395295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.476 [2024-11-20 09:16:51.395565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.476 [2024-11-20 09:16:51.395776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.476 [2024-11-20 09:16:51.395794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.476 [2024-11-20 09:16:51.395806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.476 [2024-11-20 09:16:51.395817] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.476 [2024-11-20 09:16:51.408023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.476 [2024-11-20 09:16:51.408387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.476 [2024-11-20 09:16:51.408454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.476 [2024-11-20 09:16:51.408494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.476 [2024-11-20 09:16:51.408728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.476 [2024-11-20 09:16:51.408921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.476 [2024-11-20 09:16:51.408939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.476 [2024-11-20 09:16:51.408951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.476 [2024-11-20 09:16:51.408962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.476 [2024-11-20 09:16:51.421369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.476 [2024-11-20 09:16:51.421740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.476 [2024-11-20 09:16:51.421768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.476 [2024-11-20 09:16:51.421784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.476 [2024-11-20 09:16:51.422021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.476 [2024-11-20 09:16:51.422229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.476 [2024-11-20 09:16:51.422247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.476 [2024-11-20 09:16:51.422259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.476 [2024-11-20 09:16:51.422270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.476 [2024-11-20 09:16:51.434517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.476 [2024-11-20 09:16:51.435007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.477 [2024-11-20 09:16:51.435062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.477 [2024-11-20 09:16:51.435077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.477 [2024-11-20 09:16:51.435348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.477 [2024-11-20 09:16:51.435554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.477 [2024-11-20 09:16:51.435578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.477 [2024-11-20 09:16:51.435592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.477 [2024-11-20 09:16:51.435604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.477 [2024-11-20 09:16:51.448092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.477 [2024-11-20 09:16:51.448489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.477 [2024-11-20 09:16:51.448518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.477 [2024-11-20 09:16:51.448534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.477 [2024-11-20 09:16:51.448763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.477 [2024-11-20 09:16:51.448977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.477 [2024-11-20 09:16:51.448997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.477 [2024-11-20 09:16:51.449010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.477 [2024-11-20 09:16:51.449022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.477 [2024-11-20 09:16:51.461513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.477 [2024-11-20 09:16:51.461906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.477 [2024-11-20 09:16:51.461934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:14.477 [2024-11-20 09:16:51.461950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:14.477 [2024-11-20 09:16:51.462191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:14.477 [2024-11-20 09:16:51.462426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.477 [2024-11-20 09:16:51.462448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.477 [2024-11-20 09:16:51.462462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.477 [2024-11-20 09:16:51.462475] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.477 [2024-11-20 09:16:51.474956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.477 [2024-11-20 09:16:51.475311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.477 [2024-11-20 09:16:51.475357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:14.477 [2024-11-20 09:16:51.475376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:14.477 [2024-11-20 09:16:51.475604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:14.477 [2024-11-20 09:16:51.475859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.477 [2024-11-20 09:16:51.475881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.477 [2024-11-20 09:16:51.475895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.477 [2024-11-20 09:16:51.475913] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.477 [2024-11-20 09:16:51.488275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.477 [2024-11-20 09:16:51.488719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.477 [2024-11-20 09:16:51.488747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:14.477 [2024-11-20 09:16:51.488762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:14.477 [2024-11-20 09:16:51.488986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:14.477 [2024-11-20 09:16:51.489199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.477 [2024-11-20 09:16:51.489220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.477 [2024-11-20 09:16:51.489232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.477 [2024-11-20 09:16:51.489243] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.737 [2024-11-20 09:16:51.502140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.737 [2024-11-20 09:16:51.502502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.737 [2024-11-20 09:16:51.502531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:14.737 [2024-11-20 09:16:51.502547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:14.737 [2024-11-20 09:16:51.502798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:14.737 [2024-11-20 09:16:51.502997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.737 [2024-11-20 09:16:51.503017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.737 [2024-11-20 09:16:51.503030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.737 [2024-11-20 09:16:51.503041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.737 [2024-11-20 09:16:51.515502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.737 [2024-11-20 09:16:51.515952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.737 [2024-11-20 09:16:51.515981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:14.737 [2024-11-20 09:16:51.515997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:14.737 [2024-11-20 09:16:51.516239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:14.737 [2024-11-20 09:16:51.516480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.737 [2024-11-20 09:16:51.516502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.737 [2024-11-20 09:16:51.516516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.737 [2024-11-20 09:16:51.516530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.737 [2024-11-20 09:16:51.528880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.737 [2024-11-20 09:16:51.529336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.737 [2024-11-20 09:16:51.529365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:14.737 [2024-11-20 09:16:51.529381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:14.737 [2024-11-20 09:16:51.529596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:14.737 [2024-11-20 09:16:51.529809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.737 [2024-11-20 09:16:51.529828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.737 [2024-11-20 09:16:51.529841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.737 [2024-11-20 09:16:51.529852] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.737 [2024-11-20 09:16:51.542231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.737 [2024-11-20 09:16:51.542619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.737 [2024-11-20 09:16:51.542649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:14.737 [2024-11-20 09:16:51.542664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:14.737 [2024-11-20 09:16:51.542893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:14.737 [2024-11-20 09:16:51.543115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.737 [2024-11-20 09:16:51.543135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.737 [2024-11-20 09:16:51.543147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.737 [2024-11-20 09:16:51.543159] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.737 [2024-11-20 09:16:51.555664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.737 [2024-11-20 09:16:51.555987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.737 [2024-11-20 09:16:51.556015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:14.737 [2024-11-20 09:16:51.556031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:14.737 [2024-11-20 09:16:51.556259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:14.737 [2024-11-20 09:16:51.556488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.738 [2024-11-20 09:16:51.556509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.738 [2024-11-20 09:16:51.556522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.738 [2024-11-20 09:16:51.556534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.738 [2024-11-20 09:16:51.568992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.738 [2024-11-20 09:16:51.569405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.738 [2024-11-20 09:16:51.569434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:14.738 [2024-11-20 09:16:51.569450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:14.738 [2024-11-20 09:16:51.569696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:14.738 [2024-11-20 09:16:51.569896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.738 [2024-11-20 09:16:51.569915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.738 [2024-11-20 09:16:51.569928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.738 [2024-11-20 09:16:51.569939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.738 [2024-11-20 09:16:51.582206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.738 [2024-11-20 09:16:51.582606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.738 [2024-11-20 09:16:51.582634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:14.738 [2024-11-20 09:16:51.582650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:14.738 [2024-11-20 09:16:51.582891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:14.738 [2024-11-20 09:16:51.583106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.738 [2024-11-20 09:16:51.583125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.738 [2024-11-20 09:16:51.583138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.738 [2024-11-20 09:16:51.583149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.738 [2024-11-20 09:16:51.595440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.738 [2024-11-20 09:16:51.595800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.738 [2024-11-20 09:16:51.595828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:14.738 [2024-11-20 09:16:51.595845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:14.738 [2024-11-20 09:16:51.596073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:14.738 [2024-11-20 09:16:51.596314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.738 [2024-11-20 09:16:51.596335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.738 [2024-11-20 09:16:51.596363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.738 [2024-11-20 09:16:51.596377] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.738 [2024-11-20 09:16:51.608890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.738 [2024-11-20 09:16:51.609267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.738 [2024-11-20 09:16:51.609295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:14.738 [2024-11-20 09:16:51.609323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:14.738 [2024-11-20 09:16:51.609538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:14.738 [2024-11-20 09:16:51.609781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.738 [2024-11-20 09:16:51.609806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.738 [2024-11-20 09:16:51.609819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.738 [2024-11-20 09:16:51.609831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.738 [2024-11-20 09:16:51.622248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.738 [2024-11-20 09:16:51.622622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.738 [2024-11-20 09:16:51.622650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:14.738 [2024-11-20 09:16:51.622665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:14.738 [2024-11-20 09:16:51.622894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:14.738 [2024-11-20 09:16:51.623116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.738 [2024-11-20 09:16:51.623136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.738 [2024-11-20 09:16:51.623149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.738 [2024-11-20 09:16:51.623161] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.738 [2024-11-20 09:16:51.635687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.738 [2024-11-20 09:16:51.636087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.738 [2024-11-20 09:16:51.636114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:14.738 [2024-11-20 09:16:51.636145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:14.738 [2024-11-20 09:16:51.636389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:14.738 [2024-11-20 09:16:51.636625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.738 [2024-11-20 09:16:51.636645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.738 [2024-11-20 09:16:51.636657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.738 [2024-11-20 09:16:51.636668] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.738 [2024-11-20 09:16:51.649023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.738 [2024-11-20 09:16:51.649420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.738 [2024-11-20 09:16:51.649450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:14.738 [2024-11-20 09:16:51.649466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:14.738 [2024-11-20 09:16:51.649695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:14.738 [2024-11-20 09:16:51.649929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.738 [2024-11-20 09:16:51.649948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.738 [2024-11-20 09:16:51.649961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.738 [2024-11-20 09:16:51.649979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.738 [2024-11-20 09:16:51.662345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.738 [2024-11-20 09:16:51.662727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.738 [2024-11-20 09:16:51.662756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:14.738 [2024-11-20 09:16:51.662772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:14.738 [2024-11-20 09:16:51.663000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:14.738 [2024-11-20 09:16:51.663214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.738 [2024-11-20 09:16:51.663233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.738 [2024-11-20 09:16:51.663245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.738 [2024-11-20 09:16:51.663256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.738 [2024-11-20 09:16:51.675563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.738 [2024-11-20 09:16:51.675861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.738 [2024-11-20 09:16:51.675904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:14.738 [2024-11-20 09:16:51.675918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:14.738 [2024-11-20 09:16:51.676141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:14.738 [2024-11-20 09:16:51.676383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.738 [2024-11-20 09:16:51.676404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.738 [2024-11-20 09:16:51.676416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.738 [2024-11-20 09:16:51.676428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3437540 Killed "${NVMF_APP[@]}" "$@"
00:26:14.738 09:16:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:26:14.738 09:16:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:26:14.738 09:16:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:14.738 09:16:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:26:14.738 09:16:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:14.739 [2024-11-20 09:16:51.688932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.739 [2024-11-20 09:16:51.689287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.739 [2024-11-20 09:16:51.689323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:14.739 [2024-11-20 09:16:51.689340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:14.739 [2024-11-20 09:16:51.689553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:14.739 [2024-11-20 09:16:51.689804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.739 [2024-11-20 09:16:51.689829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.739 [2024-11-20 09:16:51.689843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.739 [2024-11-20 09:16:51.689855] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.739 09:16:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3438501
00:26:14.739 09:16:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:26:14.739 09:16:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3438501
00:26:14.739 09:16:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 3438501 ']'
00:26:14.739 09:16:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:14.739 09:16:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100
00:26:14.739 09:16:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:14.739 09:16:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable
00:26:14.739 09:16:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:14.739 [2024-11-20 09:16:51.702309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.739 [2024-11-20 09:16:51.702679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.739 [2024-11-20 09:16:51.702706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:14.739 [2024-11-20 09:16:51.702722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:14.739 [2024-11-20 09:16:51.702951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:14.739 [2024-11-20 09:16:51.703164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.739 [2024-11-20 09:16:51.703183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.739 [2024-11-20 09:16:51.703197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.739 [2024-11-20 09:16:51.703209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.739 [2024-11-20 09:16:51.715696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.739 [2024-11-20 09:16:51.716069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.739 [2024-11-20 09:16:51.716098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:14.739 [2024-11-20 09:16:51.716113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:14.739 [2024-11-20 09:16:51.716351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:14.739 [2024-11-20 09:16:51.716563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.739 [2024-11-20 09:16:51.716583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.739 [2024-11-20 09:16:51.716596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.739 [2024-11-20 09:16:51.716608] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.739 [2024-11-20 09:16:51.728937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.739 [2024-11-20 09:16:51.729324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.739 [2024-11-20 09:16:51.729353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:14.739 [2024-11-20 09:16:51.729369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:14.739 [2024-11-20 09:16:51.729597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:14.739 [2024-11-20 09:16:51.729835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.739 [2024-11-20 09:16:51.729871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.739 [2024-11-20 09:16:51.729885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.739 [2024-11-20 09:16:51.729898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.739 [2024-11-20 09:16:51.740827] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization...
00:26:14.739 [2024-11-20 09:16:51.740906] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:14.739 [2024-11-20 09:16:51.742418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:14.739 [2024-11-20 09:16:51.742887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:14.739 [2024-11-20 09:16:51.742915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420
00:26:14.739 [2024-11-20 09:16:51.742947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set
00:26:14.739 [2024-11-20 09:16:51.743197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor
00:26:14.739 [2024-11-20 09:16:51.743443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:14.739 [2024-11-20 09:16:51.743465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:14.739 [2024-11-20 09:16:51.743479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:14.739 [2024-11-20 09:16:51.743492] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:14.739 [2024-11-20 09:16:51.755836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.739 [2024-11-20 09:16:51.756273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.739 [2024-11-20 09:16:51.756310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.739 [2024-11-20 09:16:51.756329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.739 [2024-11-20 09:16:51.756558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.739 [2024-11-20 09:16:51.756790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.739 [2024-11-20 09:16:51.756810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.739 [2024-11-20 09:16:51.756822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.739 [2024-11-20 09:16:51.756834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.999 [2024-11-20 09:16:51.769408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.999 [2024-11-20 09:16:51.769775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.999 [2024-11-20 09:16:51.769804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.999 [2024-11-20 09:16:51.769820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.999 [2024-11-20 09:16:51.770047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.999 [2024-11-20 09:16:51.770264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.999 [2024-11-20 09:16:51.770299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.999 [2024-11-20 09:16:51.770322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.999 [2024-11-20 09:16:51.770335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.999 [2024-11-20 09:16:51.782857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.999 [2024-11-20 09:16:51.783210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.999 [2024-11-20 09:16:51.783238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.999 [2024-11-20 09:16:51.783254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.999 [2024-11-20 09:16:51.783477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.999 [2024-11-20 09:16:51.783705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.999 [2024-11-20 09:16:51.783724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.999 [2024-11-20 09:16:51.783737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.999 [2024-11-20 09:16:51.783749] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.999 [2024-11-20 09:16:51.796134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.999 [2024-11-20 09:16:51.796489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.999 [2024-11-20 09:16:51.796518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.999 [2024-11-20 09:16:51.796534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.999 [2024-11-20 09:16:51.796765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.999 [2024-11-20 09:16:51.796986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.999 [2024-11-20 09:16:51.797006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.999 [2024-11-20 09:16:51.797018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.999 [2024-11-20 09:16:51.797030] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.999 [2024-11-20 09:16:51.809538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.999 [2024-11-20 09:16:51.809982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.999 [2024-11-20 09:16:51.810016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.999 [2024-11-20 09:16:51.810032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.999 [2024-11-20 09:16:51.810260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.999 [2024-11-20 09:16:51.810509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.999 [2024-11-20 09:16:51.810531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.999 [2024-11-20 09:16:51.810544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.999 [2024-11-20 09:16:51.810556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.999 [2024-11-20 09:16:51.818785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:14.999 [2024-11-20 09:16:51.822863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.999 [2024-11-20 09:16:51.823290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.999 [2024-11-20 09:16:51.823326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.999 [2024-11-20 09:16:51.823343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.999 [2024-11-20 09:16:51.823557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.999 [2024-11-20 09:16:51.823797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.999 [2024-11-20 09:16:51.823818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.999 [2024-11-20 09:16:51.823831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.999 [2024-11-20 09:16:51.823843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.999 [2024-11-20 09:16:51.836221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.999 [2024-11-20 09:16:51.836736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.999 [2024-11-20 09:16:51.836775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.999 [2024-11-20 09:16:51.836795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:14.999 [2024-11-20 09:16:51.837042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:14.999 [2024-11-20 09:16:51.837252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:14.999 [2024-11-20 09:16:51.837272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:14.999 [2024-11-20 09:16:51.837313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:14.999 [2024-11-20 09:16:51.837332] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:14.999 [2024-11-20 09:16:51.849739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:14.999 [2024-11-20 09:16:51.850121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:14.999 [2024-11-20 09:16:51.850150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:14.999 [2024-11-20 09:16:51.850165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:15.000 [2024-11-20 09:16:51.850398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:15.000 [2024-11-20 09:16:51.850631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.000 [2024-11-20 09:16:51.850651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.000 [2024-11-20 09:16:51.850664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.000 [2024-11-20 09:16:51.850677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.000 [2024-11-20 09:16:51.863062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.000 [2024-11-20 09:16:51.863432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.000 [2024-11-20 09:16:51.863463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:15.000 [2024-11-20 09:16:51.863479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:15.000 [2024-11-20 09:16:51.863708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:15.000 [2024-11-20 09:16:51.863930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.000 [2024-11-20 09:16:51.863950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.000 [2024-11-20 09:16:51.863963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.000 [2024-11-20 09:16:51.863975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.000 [2024-11-20 09:16:51.876532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.000 [2024-11-20 09:16:51.876926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.000 [2024-11-20 09:16:51.876955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:15.000 [2024-11-20 09:16:51.876971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:15.000 [2024-11-20 09:16:51.877186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:15.000 [2024-11-20 09:16:51.877423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.000 [2024-11-20 09:16:51.877446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.000 [2024-11-20 09:16:51.877461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.000 [2024-11-20 09:16:51.877474] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:15.000 [2024-11-20 09:16:51.877925] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:15.000 [2024-11-20 09:16:51.877974] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:15.000 [2024-11-20 09:16:51.877988] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:15.000 [2024-11-20 09:16:51.877999] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:26:15.000 [2024-11-20 09:16:51.878009] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:15.000 [2024-11-20 09:16:51.879474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:15.000 [2024-11-20 09:16:51.879501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:15.000 [2024-11-20 09:16:51.879506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:15.000 [2024-11-20 09:16:51.890109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.000 [2024-11-20 09:16:51.890611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.000 [2024-11-20 09:16:51.890652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:15.000 [2024-11-20 09:16:51.890672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:15.000 [2024-11-20 09:16:51.890895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:15.000 [2024-11-20 09:16:51.891120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.000 [2024-11-20 09:16:51.891141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.000 [2024-11-20 09:16:51.891157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.000 [2024-11-20 09:16:51.891174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.000 [2024-11-20 09:16:51.903811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.000 [2024-11-20 09:16:51.904280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.000 [2024-11-20 09:16:51.904339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:15.000 [2024-11-20 09:16:51.904360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:15.000 [2024-11-20 09:16:51.904583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:15.000 [2024-11-20 09:16:51.904814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.000 [2024-11-20 09:16:51.904836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.000 [2024-11-20 09:16:51.904852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.000 [2024-11-20 09:16:51.904868] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.000 [2024-11-20 09:16:51.917441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.000 [2024-11-20 09:16:51.917954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.000 [2024-11-20 09:16:51.917993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:15.000 [2024-11-20 09:16:51.918013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:15.000 [2024-11-20 09:16:51.918236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:15.000 [2024-11-20 09:16:51.918473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.000 [2024-11-20 09:16:51.918497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.000 [2024-11-20 09:16:51.918514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.000 [2024-11-20 09:16:51.918530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.000 [2024-11-20 09:16:51.931003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.000 [2024-11-20 09:16:51.931480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.000 [2024-11-20 09:16:51.931526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:15.000 [2024-11-20 09:16:51.931546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:15.000 [2024-11-20 09:16:51.931769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:15.000 [2024-11-20 09:16:51.931992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.000 [2024-11-20 09:16:51.932014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.000 [2024-11-20 09:16:51.932030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.000 [2024-11-20 09:16:51.932045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.000 [2024-11-20 09:16:51.944723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.000 [2024-11-20 09:16:51.945256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.000 [2024-11-20 09:16:51.945294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:15.000 [2024-11-20 09:16:51.945324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:15.000 [2024-11-20 09:16:51.945550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:15.000 [2024-11-20 09:16:51.945775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.000 [2024-11-20 09:16:51.945798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.000 [2024-11-20 09:16:51.945816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.000 [2024-11-20 09:16:51.945832] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.000 [2024-11-20 09:16:51.958376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.000 [2024-11-20 09:16:51.958854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.000 [2024-11-20 09:16:51.958893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:15.000 [2024-11-20 09:16:51.958914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:15.000 [2024-11-20 09:16:51.959138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:15.000 [2024-11-20 09:16:51.959374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.001 [2024-11-20 09:16:51.959398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.001 [2024-11-20 09:16:51.959415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.001 [2024-11-20 09:16:51.959432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.001 [2024-11-20 09:16:51.972041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.001 [2024-11-20 09:16:51.972411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.001 [2024-11-20 09:16:51.972440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:15.001 [2024-11-20 09:16:51.972456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:15.001 [2024-11-20 09:16:51.972684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:15.001 [2024-11-20 09:16:51.972903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.001 [2024-11-20 09:16:51.972924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.001 [2024-11-20 09:16:51.972937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.001 [2024-11-20 09:16:51.972950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.001 [2024-11-20 09:16:51.985528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.001 [2024-11-20 09:16:51.985851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.001 [2024-11-20 09:16:51.985879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:15.001 [2024-11-20 09:16:51.985895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:15.001 [2024-11-20 09:16:51.986109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:15.001 [2024-11-20 09:16:51.986336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.001 [2024-11-20 09:16:51.986358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.001 [2024-11-20 09:16:51.986371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.001 [2024-11-20 09:16:51.986383] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.001 [2024-11-20 09:16:51.999056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.001 [2024-11-20 09:16:51.999380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.001 [2024-11-20 09:16:51.999410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:15.001 [2024-11-20 09:16:51.999425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:15.001 [2024-11-20 09:16:51.999639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:15.001 [2024-11-20 09:16:51.999857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.001 [2024-11-20 09:16:51.999879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.001 [2024-11-20 09:16:51.999892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.001 [2024-11-20 09:16:51.999905] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.001 09:16:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:15.001 09:16:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:26:15.001 09:16:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:15.001 09:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:15.001 09:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:15.001 [2024-11-20 09:16:52.012753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.001 [2024-11-20 09:16:52.013122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.001 [2024-11-20 09:16:52.013151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:15.001 [2024-11-20 09:16:52.013173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:15.001 [2024-11-20 09:16:52.013401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:15.001 [2024-11-20 09:16:52.013620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.001 [2024-11-20 09:16:52.013641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.001 [2024-11-20 09:16:52.013656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.001 [2024-11-20 09:16:52.013669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.261 09:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:15.261 [2024-11-20 09:16:52.026354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.261 09:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:15.261 09:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.261 [2024-11-20 09:16:52.026717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.261 [2024-11-20 09:16:52.026747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:15.261 [2024-11-20 09:16:52.026763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:15.261 09:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:15.261 [2024-11-20 09:16:52.026979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:15.261 [2024-11-20 09:16:52.027198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.261 [2024-11-20 09:16:52.027220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.261 [2024-11-20 09:16:52.027234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.261 [2024-11-20 09:16:52.027247] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.261 [2024-11-20 09:16:52.029711] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:15.261 09:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.261 09:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:15.261 09:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.261 09:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:15.261 [2024-11-20 09:16:52.039947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.261 [2024-11-20 09:16:52.040277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.261 [2024-11-20 09:16:52.040313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:15.261 [2024-11-20 09:16:52.040332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:15.261 [2024-11-20 09:16:52.040546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:15.261 [2024-11-20 09:16:52.040765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.261 [2024-11-20 09:16:52.040786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.261 [2024-11-20 09:16:52.040806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.261 [2024-11-20 09:16:52.040820] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.261 [2024-11-20 09:16:52.053714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.261 [2024-11-20 09:16:52.054143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.261 [2024-11-20 09:16:52.054178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:15.261 [2024-11-20 09:16:52.054198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:15.261 [2024-11-20 09:16:52.054428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:15.261 [2024-11-20 09:16:52.054650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.261 [2024-11-20 09:16:52.054672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.261 [2024-11-20 09:16:52.054702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.261 [2024-11-20 09:16:52.054716] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.261 [2024-11-20 09:16:52.067369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.261 [2024-11-20 09:16:52.067739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.261 [2024-11-20 09:16:52.067767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:15.261 [2024-11-20 09:16:52.067784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:15.261 [2024-11-20 09:16:52.067998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:15.261 [2024-11-20 09:16:52.068217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.261 [2024-11-20 09:16:52.068238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.261 [2024-11-20 09:16:52.068253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.261 [2024-11-20 09:16:52.068266] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.261 Malloc0 00:26:15.261 09:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.261 09:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:15.261 09:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.261 09:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:15.261 [2024-11-20 09:16:52.080911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.261 [2024-11-20 09:16:52.081300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.261 [2024-11-20 09:16:52.081337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:15.261 [2024-11-20 09:16:52.081355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:15.261 [2024-11-20 09:16:52.081572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:15.261 [2024-11-20 09:16:52.081793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.261 [2024-11-20 09:16:52.081814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:15.261 [2024-11-20 09:16:52.081837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.261 [2024-11-20 09:16:52.081851] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:15.261 09:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.261 09:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:15.261 09:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.261 09:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:15.261 09:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.261 09:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:15.261 09:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.261 09:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:15.261 [2024-11-20 09:16:52.094520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.261 [2024-11-20 09:16:52.094883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.261 [2024-11-20 09:16:52.094911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe4a40 with addr=10.0.0.2, port=4420 00:26:15.261 [2024-11-20 09:16:52.094928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4a40 is same with the state(6) to be set 00:26:15.261 [2024-11-20 09:16:52.095141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe4a40 (9): Bad file descriptor 00:26:15.261 [2024-11-20 09:16:52.095370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:15.261 [2024-11-20 09:16:52.095392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] 
controller reinitialization failed 00:26:15.261 [2024-11-20 09:16:52.095405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:15.261 [2024-11-20 09:16:52.095418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:15.261 [2024-11-20 09:16:52.096104] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:15.261 09:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.261 09:16:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3437726 00:26:15.261 [2024-11-20 09:16:52.108043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:15.262 [2024-11-20 09:16:52.137013] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:26:16.635 3557.67 IOPS, 13.90 MiB/s [2024-11-20T08:16:54.596Z] 4200.43 IOPS, 16.41 MiB/s [2024-11-20T08:16:55.530Z] 4691.12 IOPS, 18.32 MiB/s [2024-11-20T08:16:56.462Z] 5075.56 IOPS, 19.83 MiB/s [2024-11-20T08:16:57.396Z] 5379.50 IOPS, 21.01 MiB/s [2024-11-20T08:16:58.328Z] 5631.45 IOPS, 22.00 MiB/s [2024-11-20T08:16:59.260Z] 5836.17 IOPS, 22.80 MiB/s [2024-11-20T08:17:00.632Z] 6002.31 IOPS, 23.45 MiB/s [2024-11-20T08:17:01.567Z] 6146.00 IOPS, 24.01 MiB/s 00:26:24.538 Latency(us) 00:26:24.538 [2024-11-20T08:17:01.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.538 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:24.538 Verification LBA range: start 0x0 length 0x4000 00:26:24.538 Nvme1n1 : 15.01 6275.71 24.51 9957.18 0.00 7862.40 2548.62 17961.72 00:26:24.538 [2024-11-20T08:17:01.567Z] =================================================================================================================== 00:26:24.538 
[2024-11-20T08:17:01.567Z] Total : 6275.71 24.51 9957.18 0.00 7862.40 2548.62 17961.72 00:26:24.538 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:26:24.538 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:24.538 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.538 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:24.538 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.538 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:24.538 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:24.538 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:24.538 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:26:24.538 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:24.538 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:26:24.538 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:24.538 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:24.538 rmmod nvme_tcp 00:26:24.538 rmmod nvme_fabrics 00:26:24.538 rmmod nvme_keyring 00:26:24.538 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:24.538 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:26:24.538 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:26:24.538 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3438501 ']' 00:26:24.538 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3438501 
00:26:24.538 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 3438501 ']' 00:26:24.538 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 3438501 00:26:24.538 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname 00:26:24.538 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:24.538 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3438501 00:26:24.538 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:24.832 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:24.832 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3438501' 00:26:24.832 killing process with pid 3438501 00:26:24.832 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@971 -- # kill 3438501 00:26:24.832 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 3438501 00:26:24.832 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:24.832 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:24.832 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:24.832 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:26:24.832 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:26:24.832 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:26:24.832 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:24.832 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:24.832 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:24.832 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.832 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:24.832 09:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:27.387 09:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:27.387 00:26:27.387 real 0m22.749s 00:26:27.387 user 1m0.401s 00:26:27.387 sys 0m4.451s 00:26:27.387 09:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:27.387 09:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:27.387 ************************************ 00:26:27.387 END TEST nvmf_bdevperf 00:26:27.387 ************************************ 00:26:27.387 09:17:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:27.387 09:17:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:27.387 09:17:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:27.387 09:17:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.387 ************************************ 00:26:27.387 START TEST nvmf_target_disconnect 00:26:27.387 ************************************ 00:26:27.387 09:17:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:27.387 * Looking for test storage... 
00:26:27.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:27.387 09:17:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:27.387 09:17:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:26:27.387 09:17:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:27.387 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:27.387 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:27.387 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:27.387 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:27.387 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:26:27.387 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:26:27.387 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:26:27.388 09:17:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:27.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.388 
--rc genhtml_branch_coverage=1 00:26:27.388 --rc genhtml_function_coverage=1 00:26:27.388 --rc genhtml_legend=1 00:26:27.388 --rc geninfo_all_blocks=1 00:26:27.388 --rc geninfo_unexecuted_blocks=1 00:26:27.388 00:26:27.388 ' 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:27.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.388 --rc genhtml_branch_coverage=1 00:26:27.388 --rc genhtml_function_coverage=1 00:26:27.388 --rc genhtml_legend=1 00:26:27.388 --rc geninfo_all_blocks=1 00:26:27.388 --rc geninfo_unexecuted_blocks=1 00:26:27.388 00:26:27.388 ' 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:27.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.388 --rc genhtml_branch_coverage=1 00:26:27.388 --rc genhtml_function_coverage=1 00:26:27.388 --rc genhtml_legend=1 00:26:27.388 --rc geninfo_all_blocks=1 00:26:27.388 --rc geninfo_unexecuted_blocks=1 00:26:27.388 00:26:27.388 ' 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:27.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:27.388 --rc genhtml_branch_coverage=1 00:26:27.388 --rc genhtml_function_coverage=1 00:26:27.388 --rc genhtml_legend=1 00:26:27.388 --rc geninfo_all_blocks=1 00:26:27.388 --rc geninfo_unexecuted_blocks=1 00:26:27.388 00:26:27.388 ' 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s 
extglob 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:27.388 09:17:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:27.388 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:26:27.388 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:27.389 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:27.389 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:27.389 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:26:27.389 09:17:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:26:29.292 
09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:29.292 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:29.292 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:29.292 Found net devices under 0000:09:00.0: cvl_0_0 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
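The discovery loop above (nvmf/common.sh@410-429) maps each supported PCI device to its kernel net interface by globbing the device's sysfs node (`pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)`) and then stripping the path prefix. A minimal sketch of that lookup; the `sysfs_root` parameter is an assumption added here so the sketch can be exercised against a fake tree, whereas the real script always reads `/sys/bus/pci`:

```shell
#!/usr/bin/env bash
# Sketch: list the net interface names registered under a PCI device,
# as the nvmf/common.sh discovery loop does with
#   pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
# sysfs_root is illustrative; the real script hardcodes /sys/bus/pci.
list_pci_net_devs() {
    local sysfs_root=$1 pci=$2 d
    for d in "$sysfs_root/devices/$pci/net/"*; do
        [ -e "$d" ] || continue        # glob matched nothing
        echo "${d##*/}"                # strip path, keep iface name
    done
}
```

On the machine in this log, the equivalent of `list_pci_net_devs /sys/bus/pci 0000:09:00.0` yields `cvl_0_0`, which is exactly the "Found net devices under 0000:09:00.0" line below.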
00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:29.292 Found net devices under 0000:09:00.1: cvl_0_1 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:29.292 09:17:06 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:29.292 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:29.293 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:29.293 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:26:29.293 00:26:29.293 --- 10.0.0.2 ping statistics --- 00:26:29.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.293 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:26:29.293 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:29.293 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:29.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:26:29.293 00:26:29.293 --- 10.0.0.1 ping statistics --- 00:26:29.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.293 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:26:29.293 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:29.293 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:26:29.293 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:29.293 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:29.293 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:29.293 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:29.293 09:17:06 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:29.293 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:29.293 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:29.293 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:29.293 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:29.293 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:29.293 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:29.293 ************************************ 00:26:29.293 START TEST nvmf_target_disconnect_tc1 00:26:29.293 ************************************ 00:26:29.293 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:26:29.293 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:29.293 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:26:29.293 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:29.293 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:29.293 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:29.293 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:29.293 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:29.293 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:29.293 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:29.293 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:29.293 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:26:29.293 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:29.551 [2024-11-20 09:17:06.388914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.551 [2024-11-20 09:17:06.388987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22eaf40 with 
addr=10.0.0.2, port=4420 00:26:29.551 [2024-11-20 09:17:06.389019] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:29.551 [2024-11-20 09:17:06.389039] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:29.551 [2024-11-20 09:17:06.389053] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:26:29.551 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:26:29.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:29.551 Initializing NVMe Controllers 00:26:29.551 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:26:29.551 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:29.551 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:29.551 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:29.551 00:26:29.551 real 0m0.104s 00:26:29.551 user 0m0.051s 00:26:29.551 sys 0m0.052s 00:26:29.551 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:29.551 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:29.551 ************************************ 00:26:29.551 END TEST nvmf_target_disconnect_tc1 00:26:29.551 ************************************ 00:26:29.551 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:29.551 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:29.551 09:17:06 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:29.551 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:29.551 ************************************ 00:26:29.551 START TEST nvmf_target_disconnect_tc2 00:26:29.551 ************************************ 00:26:29.551 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:26:29.551 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:26:29.551 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:29.551 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:29.551 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:29.551 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:29.551 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3441662 00:26:29.551 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:29.551 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3441662 00:26:29.551 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3441662 ']' 00:26:29.551 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:29.551 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:29.551 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:29.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:29.551 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:29.551 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:29.551 [2024-11-20 09:17:06.507263] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:26:29.551 [2024-11-20 09:17:06.507392] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:29.809 [2024-11-20 09:17:06.587469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:29.809 [2024-11-20 09:17:06.649057] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:29.809 [2024-11-20 09:17:06.649125] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:29.809 [2024-11-20 09:17:06.649153] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:29.809 [2024-11-20 09:17:06.649164] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:29.809 [2024-11-20 09:17:06.649174] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
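The `waitforlisten 3441662` call above blocks until the freshly started nvmf_tgt is both still alive and listening on its RPC socket at /var/tmp/spdk.sock. A hedged sketch of that polling loop; the real helper in common/autotest_common.sh additionally probes the RPC socket, so checking mere path existence (and the retry count/interval) here is a simplifying assumption:

```shell
#!/usr/bin/env bash
# Sketch of a waitforlisten-style helper: poll until the target process
# is alive AND its listen path exists, or give up after max_retries.
# The real autotest helper also issues an RPC probe; path existence is
# an illustrative stand-in.
wait_for_listen() {
    local pid=$1 path=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # process died early
        [ -e "$path" ] && return 0               # listen path is up
        sleep 0.1
    done
    return 1                                     # timed out
}
```

The dual exit conditions matter: returning early when the process dies is what lets the harness fail fast instead of burning the whole retry budget on a target that crashed during startup.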
00:26:29.809 [2024-11-20 09:17:06.650819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:26:29.809 [2024-11-20 09:17:06.650887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:26:29.809 [2024-11-20 09:17:06.650949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:26:29.809 [2024-11-20 09:17:06.650953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:29.809 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:29.809 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:26:29.809 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:29.809 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:29.809 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:29.809 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:29.809 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:29.810 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.810 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:30.067 Malloc0 00:26:30.067 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.067 09:17:06 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:30.067 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.067 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:30.067 [2024-11-20 09:17:06.845368] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:30.067 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.067 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:30.067 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.067 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:30.067 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.067 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:30.067 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.067 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:30.067 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.067 09:17:06 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:30.067 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.067 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:30.067 [2024-11-20 09:17:06.873630] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:30.067 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.067 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:30.067 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.067 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:30.067 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.067 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3441690 00:26:30.067 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:26:30.067 09:17:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:31.975 09:17:08 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3441662 00:26:31.975 09:17:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:26:31.975 Read completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Read completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Read completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Read completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Read completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Write completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Read completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Write completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Write completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Write completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Write completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Read completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Write completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Read completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Write completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Read completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Write completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Read completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Read completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Write completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 
Read completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Write completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Write completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Write completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Read completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Write completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Read completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Write completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Read completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Read completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Write completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Read completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 [2024-11-20 09:17:08.898238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:31.975 Read completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Read completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Read completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Write completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Read completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Write completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Read completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Read completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Write completed with error (sct=0, sc=8) 00:26:31.975 starting I/O 
failed 00:26:31.975 Write completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Read completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Read completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Write completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Read completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Read completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Write completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Write completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Write completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Write completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Write completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Read completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Write completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Write completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Write completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Read completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Read completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Write completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Read completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Read completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Write completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Write completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 00:26:31.975 Write completed with error (sct=0, sc=8) 00:26:31.975 starting I/O failed 
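Every one of these I/O failures is expected: `kill -9 3441662` above removed the target out from under the reconnect example, and the harness treats a clean failure as a pass via the `NOT` wrapper seen in tc1 (common/autotest_common.sh@650-677, which inverts the exit status and checks `es`). A minimal sketch of that inversion idiom; the real helper also validates the executable and tracks the error status, which is omitted here:

```shell
#!/usr/bin/env bash
# Sketch of the NOT idiom: run a command that is *expected* to fail,
# and succeed iff it did fail. Mirrors the NOT/es flow in
# common/autotest_common.sh (valid_exec_arg and es bookkeeping omitted).
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    else
        return 0    # command failed, as the test requires
    fi
}
```

Usage: `NOT build/examples/reconnect -r 'trtype:tcp ...'` passes exactly when the reconnect example cannot reach the deliberately dead target, which is what tc1's `es=1` path records above.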
00:26:31.975 [2024-11-20 09:17:08.898610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:31.975 Read completed with error (sct=0, sc=8)
00:26:31.975 starting I/O failed
00:26:31.975 Read completed with error (sct=0, sc=8)
00:26:31.975 starting I/O failed
00:26:31.975 Read completed with error (sct=0, sc=8)
00:26:31.975 starting I/O failed
00:26:31.975 Read completed with error (sct=0, sc=8)
00:26:31.975 starting I/O failed
00:26:31.975 Read completed with error (sct=0, sc=8)
00:26:31.975 starting I/O failed
00:26:31.975 Read completed with error (sct=0, sc=8)
00:26:31.975 starting I/O failed
00:26:31.975 Read completed with error (sct=0, sc=8)
00:26:31.975 starting I/O failed
00:26:31.975 Read completed with error (sct=0, sc=8)
00:26:31.975 starting I/O failed
00:26:31.975 Write completed with error (sct=0, sc=8)
00:26:31.975 starting I/O failed
00:26:31.975 Read completed with error (sct=0, sc=8)
00:26:31.975 starting I/O failed
00:26:31.975 Write completed with error (sct=0, sc=8)
00:26:31.975 starting I/O failed
00:26:31.975 Read completed with error (sct=0, sc=8)
00:26:31.975 starting I/O failed
00:26:31.975 Read completed with error (sct=0, sc=8)
00:26:31.975 starting I/O failed
00:26:31.975 Read completed with error (sct=0, sc=8)
00:26:31.975 starting I/O failed
00:26:31.975 Write completed with error (sct=0, sc=8)
00:26:31.975 starting I/O failed
00:26:31.975 Write completed with error (sct=0, sc=8)
00:26:31.975 starting I/O failed
00:26:31.975 Read completed with error (sct=0, sc=8)
00:26:31.975 starting I/O failed
00:26:31.975 Write completed with error (sct=0, sc=8)
00:26:31.975 starting I/O failed
00:26:31.975 Read completed with error (sct=0, sc=8)
00:26:31.975 starting I/O failed
00:26:31.975 Read completed with error (sct=0, sc=8)
00:26:31.975 starting I/O failed
00:26:31.975 Write completed with error (sct=0, sc=8)
00:26:31.975 starting I/O failed
00:26:31.975 Read completed with error (sct=0, sc=8)
00:26:31.975 starting I/O failed
00:26:31.975 Read completed with error (sct=0, sc=8)
00:26:31.975 starting I/O failed
00:26:31.975 Write completed with error (sct=0, sc=8)
00:26:31.975 starting I/O failed
00:26:31.975 Read completed with error (sct=0, sc=8)
00:26:31.975 starting I/O failed
00:26:31.975 Write completed with error (sct=0, sc=8)
00:26:31.975 starting I/O failed
00:26:31.975 Read completed with error (sct=0, sc=8)
00:26:31.975 starting I/O failed
00:26:31.975 Read completed with error (sct=0, sc=8)
00:26:31.975 starting I/O failed
00:26:31.975 Write completed with error (sct=0, sc=8)
00:26:31.975 starting I/O failed
00:26:31.975 Read completed with error (sct=0, sc=8)
00:26:31.975 starting I/O failed
00:26:31.975 Read completed with error (sct=0, sc=8)
00:26:31.975 starting I/O failed
00:26:31.975 Write completed with error (sct=0, sc=8)
00:26:31.975 starting I/O failed
00:26:31.975 [2024-11-20 09:17:08.898957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:31.975 [2024-11-20 09:17:08.899167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.975 [2024-11-20 09:17:08.899206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.975 qpair failed and we were unable to recover it.
00:26:31.975 [2024-11-20 09:17:08.899319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.975 [2024-11-20 09:17:08.899347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.976 qpair failed and we were unable to recover it.
00:26:31.976 [2024-11-20 09:17:08.899454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.976 [2024-11-20 09:17:08.899481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.976 qpair failed and we were unable to recover it.
00:26:31.976 [2024-11-20 09:17:08.899580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.976 [2024-11-20 09:17:08.899606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.976 qpair failed and we were unable to recover it.
00:26:31.976 [2024-11-20 09:17:08.899718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.976 [2024-11-20 09:17:08.899744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.976 qpair failed and we were unable to recover it.
00:26:31.976 [2024-11-20 09:17:08.899830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.976 [2024-11-20 09:17:08.899856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.976 qpair failed and we were unable to recover it.
00:26:31.976 [2024-11-20 09:17:08.899946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.976 [2024-11-20 09:17:08.899972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.976 qpair failed and we were unable to recover it.
00:26:31.976 [2024-11-20 09:17:08.900070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.976 [2024-11-20 09:17:08.900097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.976 qpair failed and we were unable to recover it.
00:26:31.976 [2024-11-20 09:17:08.900218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.976 [2024-11-20 09:17:08.900260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.976 qpair failed and we were unable to recover it.
00:26:31.976 [2024-11-20 09:17:08.900371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.976 [2024-11-20 09:17:08.900399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.976 qpair failed and we were unable to recover it.
00:26:31.976 [2024-11-20 09:17:08.900498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.976 [2024-11-20 09:17:08.900524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.976 qpair failed and we were unable to recover it.
00:26:31.976 [2024-11-20 09:17:08.900665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.976 [2024-11-20 09:17:08.900693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.976 qpair failed and we were unable to recover it.
00:26:31.976 [2024-11-20 09:17:08.900815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.976 [2024-11-20 09:17:08.900841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.976 qpair failed and we were unable to recover it.
00:26:31.976 [2024-11-20 09:17:08.900960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.976 [2024-11-20 09:17:08.900986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.976 qpair failed and we were unable to recover it.
00:26:31.976 [2024-11-20 09:17:08.901070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.976 [2024-11-20 09:17:08.901095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.976 qpair failed and we were unable to recover it.
00:26:31.976 [2024-11-20 09:17:08.901179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.976 [2024-11-20 09:17:08.901205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.976 qpair failed and we were unable to recover it.
00:26:31.976 [2024-11-20 09:17:08.901337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.976 [2024-11-20 09:17:08.901363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.976 qpair failed and we were unable to recover it.
00:26:31.976 [2024-11-20 09:17:08.901461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.976 [2024-11-20 09:17:08.901486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.976 qpair failed and we were unable to recover it.
00:26:31.976 [2024-11-20 09:17:08.901569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.976 [2024-11-20 09:17:08.901602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.976 qpair failed and we were unable to recover it.
00:26:31.976 [2024-11-20 09:17:08.901723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.976 [2024-11-20 09:17:08.901749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.976 qpair failed and we were unable to recover it.
00:26:31.976 [2024-11-20 09:17:08.901885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.976 [2024-11-20 09:17:08.901910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.976 qpair failed and we were unable to recover it.
00:26:31.976 [2024-11-20 09:17:08.902030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.976 [2024-11-20 09:17:08.902056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.976 qpair failed and we were unable to recover it.
00:26:31.976 [2024-11-20 09:17:08.902194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.976 [2024-11-20 09:17:08.902220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.976 qpair failed and we were unable to recover it.
00:26:31.976 [2024-11-20 09:17:08.902348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.976 [2024-11-20 09:17:08.902387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.976 qpair failed and we were unable to recover it.
00:26:31.976 [2024-11-20 09:17:08.902481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.976 [2024-11-20 09:17:08.902510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.976 qpair failed and we were unable to recover it.
00:26:31.976 [2024-11-20 09:17:08.902633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.976 [2024-11-20 09:17:08.902660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.976 qpair failed and we were unable to recover it.
00:26:31.976 [2024-11-20 09:17:08.902771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.976 [2024-11-20 09:17:08.902797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.976 qpair failed and we were unable to recover it.
00:26:31.976 [2024-11-20 09:17:08.902904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.976 [2024-11-20 09:17:08.902931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.976 qpair failed and we were unable to recover it.
00:26:31.976 [2024-11-20 09:17:08.903045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.976 [2024-11-20 09:17:08.903071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.976 qpair failed and we were unable to recover it.
00:26:31.976 [2024-11-20 09:17:08.903184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.976 [2024-11-20 09:17:08.903210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.976 qpair failed and we were unable to recover it.
00:26:31.976 [2024-11-20 09:17:08.903322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.976 [2024-11-20 09:17:08.903349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.976 qpair failed and we were unable to recover it.
00:26:31.976 [2024-11-20 09:17:08.903438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.976 [2024-11-20 09:17:08.903464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.976 qpair failed and we were unable to recover it.
00:26:31.976 [2024-11-20 09:17:08.903554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.976 [2024-11-20 09:17:08.903579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.976 qpair failed and we were unable to recover it.
00:26:31.976 [2024-11-20 09:17:08.903695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.976 [2024-11-20 09:17:08.903721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.976 qpair failed and we were unable to recover it.
00:26:31.976 [2024-11-20 09:17:08.903830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.976 [2024-11-20 09:17:08.903855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.976 qpair failed and we were unable to recover it.
00:26:31.976 [2024-11-20 09:17:08.903980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.976 [2024-11-20 09:17:08.904007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.976 qpair failed and we were unable to recover it.
00:26:31.976 Read completed with error (sct=0, sc=8)
00:26:31.976 starting I/O failed
00:26:31.976 Read completed with error (sct=0, sc=8)
00:26:31.976 starting I/O failed
00:26:31.976 Read completed with error (sct=0, sc=8)
00:26:31.976 starting I/O failed
00:26:31.976 Read completed with error (sct=0, sc=8)
00:26:31.976 starting I/O failed
00:26:31.976 Read completed with error (sct=0, sc=8)
00:26:31.976 starting I/O failed
00:26:31.976 Read completed with error (sct=0, sc=8)
00:26:31.976 starting I/O failed
00:26:31.976 Read completed with error (sct=0, sc=8)
00:26:31.976 starting I/O failed
00:26:31.976 Write completed with error (sct=0, sc=8)
00:26:31.976 starting I/O failed
00:26:31.976 Read completed with error (sct=0, sc=8)
00:26:31.976 starting I/O failed
00:26:31.976 Write completed with error (sct=0, sc=8)
00:26:31.976 starting I/O failed
00:26:31.976 Read completed with error (sct=0, sc=8)
00:26:31.976 starting I/O failed
00:26:31.977 Read completed with error (sct=0, sc=8)
00:26:31.977 starting I/O failed
00:26:31.977 Read completed with error (sct=0, sc=8)
00:26:31.977 starting I/O failed
00:26:31.977 Read completed with error (sct=0, sc=8)
00:26:31.977 starting I/O failed
00:26:31.977 Read completed with error (sct=0, sc=8)
00:26:31.977 starting I/O failed
00:26:31.977 Write completed with error (sct=0, sc=8)
00:26:31.977 starting I/O failed
00:26:31.977 Read completed with error (sct=0, sc=8)
00:26:31.977 starting I/O failed
00:26:31.977 Read completed with error (sct=0, sc=8)
00:26:31.977 starting I/O failed
00:26:31.977 Read completed with error (sct=0, sc=8)
00:26:31.977 starting I/O failed
00:26:31.977 Write completed with error (sct=0, sc=8)
00:26:31.977 starting I/O failed
00:26:31.977 Write completed with error (sct=0, sc=8)
00:26:31.977 starting I/O failed
00:26:31.977 Read completed with error (sct=0, sc=8)
00:26:31.977 starting I/O failed
00:26:31.977 Read completed with error (sct=0, sc=8)
00:26:31.977 starting I/O failed
00:26:31.977 Read completed with error (sct=0, sc=8)
00:26:31.977 starting I/O failed
00:26:31.977 Read completed with error (sct=0, sc=8)
00:26:31.977 starting I/O failed
00:26:31.977 Write completed with error (sct=0, sc=8)
00:26:31.977 starting I/O failed
00:26:31.977 Read completed with error (sct=0, sc=8)
00:26:31.977 starting I/O failed
00:26:31.977 Read completed with error (sct=0, sc=8)
00:26:31.977 starting I/O failed
00:26:31.977 Write completed with error (sct=0, sc=8)
00:26:31.977 starting I/O failed
00:26:31.977 Read completed with error (sct=0, sc=8)
00:26:31.977 starting I/O failed
00:26:31.977 Read completed with error (sct=0, sc=8)
00:26:31.977 starting I/O failed
00:26:31.977 Read completed with error (sct=0, sc=8)
00:26:31.977 starting I/O failed
00:26:31.977 [2024-11-20 09:17:08.904325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.977 [2024-11-20 09:17:08.904408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.977 [2024-11-20 09:17:08.904455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:31.977 qpair failed and we were unable to recover it.
00:26:31.977 [2024-11-20 09:17:08.904555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.977 [2024-11-20 09:17:08.904586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:31.977 qpair failed and we were unable to recover it.
00:26:31.977 [2024-11-20 09:17:08.904706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.977 [2024-11-20 09:17:08.904733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:31.977 qpair failed and we were unable to recover it.
00:26:31.977 [2024-11-20 09:17:08.904854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.977 [2024-11-20 09:17:08.904880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:31.977 qpair failed and we were unable to recover it.
00:26:31.977 [2024-11-20 09:17:08.905021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.977 [2024-11-20 09:17:08.905047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:31.977 qpair failed and we were unable to recover it.
00:26:31.977 [2024-11-20 09:17:08.905166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.977 [2024-11-20 09:17:08.905194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.977 qpair failed and we were unable to recover it.
00:26:31.977 [2024-11-20 09:17:08.905284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.977 [2024-11-20 09:17:08.905316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.977 qpair failed and we were unable to recover it.
00:26:31.977 [2024-11-20 09:17:08.905407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.977 [2024-11-20 09:17:08.905434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.977 qpair failed and we were unable to recover it.
00:26:31.977 [2024-11-20 09:17:08.905527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.977 [2024-11-20 09:17:08.905554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.977 qpair failed and we were unable to recover it.
00:26:31.977 [2024-11-20 09:17:08.905674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.977 [2024-11-20 09:17:08.905700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.977 qpair failed and we were unable to recover it.
00:26:31.977 [2024-11-20 09:17:08.905818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.977 [2024-11-20 09:17:08.905844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.977 qpair failed and we were unable to recover it.
00:26:31.977 [2024-11-20 09:17:08.905924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.977 [2024-11-20 09:17:08.905951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.977 qpair failed and we were unable to recover it.
00:26:31.977 [2024-11-20 09:17:08.906089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.977 [2024-11-20 09:17:08.906115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.977 qpair failed and we were unable to recover it.
00:26:31.977 [2024-11-20 09:17:08.906227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.977 [2024-11-20 09:17:08.906253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.977 qpair failed and we were unable to recover it.
00:26:31.977 [2024-11-20 09:17:08.906366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.977 [2024-11-20 09:17:08.906395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:31.977 qpair failed and we were unable to recover it.
00:26:31.977 [2024-11-20 09:17:08.906497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.977 [2024-11-20 09:17:08.906547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.977 qpair failed and we were unable to recover it.
00:26:31.977 [2024-11-20 09:17:08.906665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.977 [2024-11-20 09:17:08.906692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.977 qpair failed and we were unable to recover it.
00:26:31.977 [2024-11-20 09:17:08.906821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.977 [2024-11-20 09:17:08.906847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.977 qpair failed and we were unable to recover it.
00:26:31.977 [2024-11-20 09:17:08.906963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.977 [2024-11-20 09:17:08.906994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.977 qpair failed and we were unable to recover it.
00:26:31.977 [2024-11-20 09:17:08.907076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.977 [2024-11-20 09:17:08.907103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.977 qpair failed and we were unable to recover it.
00:26:31.977 [2024-11-20 09:17:08.907188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.977 [2024-11-20 09:17:08.907214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.977 qpair failed and we were unable to recover it.
00:26:31.977 [2024-11-20 09:17:08.907318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.977 [2024-11-20 09:17:08.907346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:31.977 qpair failed and we were unable to recover it.
00:26:31.977 [2024-11-20 09:17:08.907459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.977 [2024-11-20 09:17:08.907498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.977 qpair failed and we were unable to recover it.
00:26:31.977 [2024-11-20 09:17:08.907586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.977 [2024-11-20 09:17:08.907613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.977 qpair failed and we were unable to recover it.
00:26:31.977 [2024-11-20 09:17:08.907754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.977 [2024-11-20 09:17:08.907780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.977 qpair failed and we were unable to recover it.
00:26:31.977 [2024-11-20 09:17:08.907867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.977 [2024-11-20 09:17:08.907894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.977 qpair failed and we were unable to recover it.
00:26:31.977 [2024-11-20 09:17:08.907976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.977 [2024-11-20 09:17:08.908002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.977 qpair failed and we were unable to recover it.
00:26:31.977 [2024-11-20 09:17:08.908080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.977 [2024-11-20 09:17:08.908106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.977 qpair failed and we were unable to recover it.
00:26:31.977 [2024-11-20 09:17:08.908248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.977 [2024-11-20 09:17:08.908274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.977 qpair failed and we were unable to recover it.
00:26:31.977 [2024-11-20 09:17:08.908366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.977 [2024-11-20 09:17:08.908394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.978 qpair failed and we were unable to recover it.
00:26:31.978 [2024-11-20 09:17:08.908471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.978 [2024-11-20 09:17:08.908498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.978 qpair failed and we were unable to recover it.
00:26:31.978 [2024-11-20 09:17:08.908592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.978 [2024-11-20 09:17:08.908618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.978 qpair failed and we were unable to recover it.
00:26:31.978 [2024-11-20 09:17:08.908740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.978 [2024-11-20 09:17:08.908766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.978 qpair failed and we were unable to recover it.
00:26:31.978 [2024-11-20 09:17:08.908855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.978 [2024-11-20 09:17:08.908883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.978 qpair failed and we were unable to recover it.
00:26:31.978 [2024-11-20 09:17:08.909000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.978 [2024-11-20 09:17:08.909027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.978 qpair failed and we were unable to recover it.
00:26:31.978 [2024-11-20 09:17:08.909108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.978 [2024-11-20 09:17:08.909134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.978 qpair failed and we were unable to recover it.
00:26:31.978 [2024-11-20 09:17:08.909229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.978 [2024-11-20 09:17:08.909255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.978 qpair failed and we were unable to recover it.
00:26:31.978 [2024-11-20 09:17:08.909371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.978 [2024-11-20 09:17:08.909411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:31.978 qpair failed and we were unable to recover it.
00:26:31.978 [2024-11-20 09:17:08.909515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.978 [2024-11-20 09:17:08.909543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:31.978 qpair failed and we were unable to recover it.
00:26:31.978 [2024-11-20 09:17:08.909692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.978 [2024-11-20 09:17:08.909719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:31.978 qpair failed and we were unable to recover it.
00:26:31.978 [2024-11-20 09:17:08.909830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.978 [2024-11-20 09:17:08.909856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:31.978 qpair failed and we were unable to recover it.
00:26:31.978 [2024-11-20 09:17:08.909950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.978 [2024-11-20 09:17:08.909979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.978 qpair failed and we were unable to recover it.
00:26:31.978 [2024-11-20 09:17:08.910087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.978 [2024-11-20 09:17:08.910127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.978 qpair failed and we were unable to recover it.
00:26:31.978 [2024-11-20 09:17:08.910247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.978 [2024-11-20 09:17:08.910275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.978 qpair failed and we were unable to recover it.
00:26:31.978 [2024-11-20 09:17:08.910376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.978 [2024-11-20 09:17:08.910402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.978 qpair failed and we were unable to recover it.
00:26:31.978 [2024-11-20 09:17:08.910492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.978 [2024-11-20 09:17:08.910523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.978 qpair failed and we were unable to recover it.
00:26:31.978 [2024-11-20 09:17:08.910618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.978 [2024-11-20 09:17:08.910644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.978 qpair failed and we were unable to recover it.
00:26:31.978 [2024-11-20 09:17:08.910759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.978 [2024-11-20 09:17:08.910785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.978 qpair failed and we were unable to recover it. 00:26:31.978 [2024-11-20 09:17:08.910900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.978 [2024-11-20 09:17:08.910927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.978 qpair failed and we were unable to recover it. 00:26:31.978 [2024-11-20 09:17:08.911014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.978 [2024-11-20 09:17:08.911041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.978 qpair failed and we were unable to recover it. 00:26:31.978 [2024-11-20 09:17:08.911129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.978 [2024-11-20 09:17:08.911155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.978 qpair failed and we were unable to recover it. 00:26:31.978 [2024-11-20 09:17:08.911236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.978 [2024-11-20 09:17:08.911262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.978 qpair failed and we were unable to recover it. 
00:26:31.978 [2024-11-20 09:17:08.911356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.978 [2024-11-20 09:17:08.911384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.978 qpair failed and we were unable to recover it. 00:26:31.978 [2024-11-20 09:17:08.911497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.978 [2024-11-20 09:17:08.911524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.978 qpair failed and we were unable to recover it. 00:26:31.978 [2024-11-20 09:17:08.911612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.978 [2024-11-20 09:17:08.911639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.978 qpair failed and we were unable to recover it. 00:26:31.978 [2024-11-20 09:17:08.911716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.978 [2024-11-20 09:17:08.911742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.978 qpair failed and we were unable to recover it. 00:26:31.978 [2024-11-20 09:17:08.911836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.978 [2024-11-20 09:17:08.911862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.978 qpair failed and we were unable to recover it. 
00:26:31.978 [2024-11-20 09:17:08.911957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.978 [2024-11-20 09:17:08.911983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.978 qpair failed and we were unable to recover it. 00:26:31.978 [2024-11-20 09:17:08.912070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.978 [2024-11-20 09:17:08.912096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.978 qpair failed and we were unable to recover it. 00:26:31.978 [2024-11-20 09:17:08.912192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.978 [2024-11-20 09:17:08.912219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.978 qpair failed and we were unable to recover it. 00:26:31.978 [2024-11-20 09:17:08.912327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.978 [2024-11-20 09:17:08.912354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.978 qpair failed and we were unable to recover it. 00:26:31.978 [2024-11-20 09:17:08.912437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.978 [2024-11-20 09:17:08.912463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.978 qpair failed and we were unable to recover it. 
00:26:31.978 [2024-11-20 09:17:08.912548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.978 [2024-11-20 09:17:08.912574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.978 qpair failed and we were unable to recover it. 00:26:31.978 [2024-11-20 09:17:08.912683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.978 [2024-11-20 09:17:08.912709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.978 qpair failed and we were unable to recover it. 00:26:31.978 [2024-11-20 09:17:08.912854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.978 [2024-11-20 09:17:08.912881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.978 qpair failed and we were unable to recover it. 00:26:31.978 [2024-11-20 09:17:08.912963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.978 [2024-11-20 09:17:08.912988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.978 qpair failed and we were unable to recover it. 00:26:31.978 [2024-11-20 09:17:08.913096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.978 [2024-11-20 09:17:08.913123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.978 qpair failed and we were unable to recover it. 
00:26:31.979 [2024-11-20 09:17:08.913220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.979 [2024-11-20 09:17:08.913259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.979 qpair failed and we were unable to recover it. 00:26:31.979 [2024-11-20 09:17:08.914329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.979 [2024-11-20 09:17:08.914369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.979 qpair failed and we were unable to recover it. 00:26:31.979 [2024-11-20 09:17:08.914474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.979 [2024-11-20 09:17:08.914504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.979 qpair failed and we were unable to recover it. 00:26:31.979 [2024-11-20 09:17:08.914605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.979 [2024-11-20 09:17:08.914631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.979 qpair failed and we were unable to recover it. 00:26:31.979 [2024-11-20 09:17:08.914739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.979 [2024-11-20 09:17:08.914766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.979 qpair failed and we were unable to recover it. 
00:26:31.979 [2024-11-20 09:17:08.914885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.979 [2024-11-20 09:17:08.914912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.979 qpair failed and we were unable to recover it. 00:26:31.979 [2024-11-20 09:17:08.915023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.979 [2024-11-20 09:17:08.915051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.979 qpair failed and we were unable to recover it. 00:26:31.979 [2024-11-20 09:17:08.915195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.979 [2024-11-20 09:17:08.915221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.979 qpair failed and we were unable to recover it. 00:26:31.979 [2024-11-20 09:17:08.915339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.979 [2024-11-20 09:17:08.915366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.979 qpair failed and we were unable to recover it. 00:26:31.979 [2024-11-20 09:17:08.915454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.979 [2024-11-20 09:17:08.915480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.979 qpair failed and we were unable to recover it. 
00:26:31.979 [2024-11-20 09:17:08.915599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.979 [2024-11-20 09:17:08.915626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.979 qpair failed and we were unable to recover it. 00:26:31.979 [2024-11-20 09:17:08.915767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.979 [2024-11-20 09:17:08.915793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.979 qpair failed and we were unable to recover it. 00:26:31.979 [2024-11-20 09:17:08.915887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.979 [2024-11-20 09:17:08.915913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.979 qpair failed and we were unable to recover it. 00:26:31.979 [2024-11-20 09:17:08.916000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.979 [2024-11-20 09:17:08.916026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.979 qpair failed and we were unable to recover it. 00:26:31.979 [2024-11-20 09:17:08.916139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.979 [2024-11-20 09:17:08.916165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.979 qpair failed and we were unable to recover it. 
00:26:31.979 [2024-11-20 09:17:08.916278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.979 [2024-11-20 09:17:08.916314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.979 qpair failed and we were unable to recover it. 00:26:31.979 [2024-11-20 09:17:08.916406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.979 [2024-11-20 09:17:08.916433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.979 qpair failed and we were unable to recover it. 00:26:31.979 [2024-11-20 09:17:08.916530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.979 [2024-11-20 09:17:08.916569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.979 qpair failed and we were unable to recover it. 00:26:31.979 [2024-11-20 09:17:08.916687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.979 [2024-11-20 09:17:08.916720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.979 qpair failed and we were unable to recover it. 00:26:31.979 [2024-11-20 09:17:08.916813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.979 [2024-11-20 09:17:08.916840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.979 qpair failed and we were unable to recover it. 
00:26:31.979 [2024-11-20 09:17:08.916955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.979 [2024-11-20 09:17:08.916980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.979 qpair failed and we were unable to recover it. 00:26:31.979 [2024-11-20 09:17:08.917090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.979 [2024-11-20 09:17:08.917129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.979 qpair failed and we were unable to recover it. 00:26:31.979 [2024-11-20 09:17:08.917292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.979 [2024-11-20 09:17:08.917339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.979 qpair failed and we were unable to recover it. 00:26:31.979 [2024-11-20 09:17:08.917436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.979 [2024-11-20 09:17:08.917464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.979 qpair failed and we were unable to recover it. 00:26:31.979 [2024-11-20 09:17:08.917548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.979 [2024-11-20 09:17:08.917575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.979 qpair failed and we were unable to recover it. 
00:26:31.979 [2024-11-20 09:17:08.917680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.979 [2024-11-20 09:17:08.917707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.979 qpair failed and we were unable to recover it. 00:26:31.979 [2024-11-20 09:17:08.917831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.979 [2024-11-20 09:17:08.917857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.979 qpair failed and we were unable to recover it. 00:26:31.979 [2024-11-20 09:17:08.917948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.979 [2024-11-20 09:17:08.917976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.979 qpair failed and we were unable to recover it. 00:26:31.979 [2024-11-20 09:17:08.918070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.979 [2024-11-20 09:17:08.918096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.979 qpair failed and we were unable to recover it. 00:26:31.979 [2024-11-20 09:17:08.918186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.979 [2024-11-20 09:17:08.918227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.979 qpair failed and we were unable to recover it. 
00:26:31.979 [2024-11-20 09:17:08.918335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.979 [2024-11-20 09:17:08.918364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.979 qpair failed and we were unable to recover it. 00:26:31.979 [2024-11-20 09:17:08.918494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.980 [2024-11-20 09:17:08.918532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.980 qpair failed and we were unable to recover it. 00:26:31.980 [2024-11-20 09:17:08.918663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.980 [2024-11-20 09:17:08.918690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.980 qpair failed and we were unable to recover it. 00:26:31.980 [2024-11-20 09:17:08.918807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.980 [2024-11-20 09:17:08.918833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.980 qpair failed and we were unable to recover it. 00:26:31.980 [2024-11-20 09:17:08.918918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.980 [2024-11-20 09:17:08.918944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.980 qpair failed and we were unable to recover it. 
00:26:31.980 [2024-11-20 09:17:08.919071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.980 [2024-11-20 09:17:08.919110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.980 qpair failed and we were unable to recover it. 00:26:31.980 [2024-11-20 09:17:08.919209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.980 [2024-11-20 09:17:08.919237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.980 qpair failed and we were unable to recover it. 00:26:31.980 [2024-11-20 09:17:08.919370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.980 [2024-11-20 09:17:08.919410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.980 qpair failed and we were unable to recover it. 00:26:31.980 [2024-11-20 09:17:08.919535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.980 [2024-11-20 09:17:08.919563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.980 qpair failed and we were unable to recover it. 00:26:31.980 [2024-11-20 09:17:08.919678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.980 [2024-11-20 09:17:08.919704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.980 qpair failed and we were unable to recover it. 
00:26:31.980 [2024-11-20 09:17:08.919797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.980 [2024-11-20 09:17:08.919824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.980 qpair failed and we were unable to recover it. 00:26:31.980 [2024-11-20 09:17:08.919946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.980 [2024-11-20 09:17:08.919973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.980 qpair failed and we were unable to recover it. 00:26:31.980 [2024-11-20 09:17:08.920120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.980 [2024-11-20 09:17:08.920148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.980 qpair failed and we were unable to recover it. 00:26:31.980 [2024-11-20 09:17:08.920259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.980 [2024-11-20 09:17:08.920299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.980 qpair failed and we were unable to recover it. 00:26:31.980 [2024-11-20 09:17:08.920414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.980 [2024-11-20 09:17:08.920442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.980 qpair failed and we were unable to recover it. 
00:26:31.980 [2024-11-20 09:17:08.920527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.980 [2024-11-20 09:17:08.920554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.980 qpair failed and we were unable to recover it. 00:26:31.980 [2024-11-20 09:17:08.920693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.980 [2024-11-20 09:17:08.920718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.980 qpair failed and we were unable to recover it. 00:26:31.980 [2024-11-20 09:17:08.920857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.980 [2024-11-20 09:17:08.920883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.980 qpair failed and we were unable to recover it. 00:26:31.980 [2024-11-20 09:17:08.920957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.980 [2024-11-20 09:17:08.920982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.980 qpair failed and we were unable to recover it. 00:26:31.980 [2024-11-20 09:17:08.921102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.980 [2024-11-20 09:17:08.921130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.980 qpair failed and we were unable to recover it. 
00:26:31.980 [2024-11-20 09:17:08.921239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.980 [2024-11-20 09:17:08.921265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.980 qpair failed and we were unable to recover it. 00:26:31.980 [2024-11-20 09:17:08.921383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.980 [2024-11-20 09:17:08.921412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.980 qpair failed and we were unable to recover it. 00:26:31.980 [2024-11-20 09:17:08.921505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.980 [2024-11-20 09:17:08.921532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.980 qpair failed and we were unable to recover it. 00:26:31.980 [2024-11-20 09:17:08.921634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.980 [2024-11-20 09:17:08.921661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.980 qpair failed and we were unable to recover it. 00:26:31.980 [2024-11-20 09:17:08.921777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.980 [2024-11-20 09:17:08.921804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.980 qpair failed and we were unable to recover it. 
00:26:31.980 [2024-11-20 09:17:08.921896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.980 [2024-11-20 09:17:08.921922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.980 qpair failed and we were unable to recover it. 00:26:31.980 [2024-11-20 09:17:08.922064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.980 [2024-11-20 09:17:08.922093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.980 qpair failed and we were unable to recover it. 00:26:31.980 [2024-11-20 09:17:08.922206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.980 [2024-11-20 09:17:08.922232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.980 qpair failed and we were unable to recover it. 00:26:31.980 [2024-11-20 09:17:08.922317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.980 [2024-11-20 09:17:08.922345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.980 qpair failed and we were unable to recover it. 00:26:31.980 [2024-11-20 09:17:08.922439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.980 [2024-11-20 09:17:08.922465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.980 qpair failed and we were unable to recover it. 
00:26:31.980 [2024-11-20 09:17:08.922549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.980 [2024-11-20 09:17:08.922574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.980 qpair failed and we were unable to recover it. 00:26:31.980 [2024-11-20 09:17:08.922681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.980 [2024-11-20 09:17:08.922706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.980 qpair failed and we were unable to recover it. 00:26:31.980 [2024-11-20 09:17:08.922800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.980 [2024-11-20 09:17:08.922827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.980 qpair failed and we were unable to recover it. 00:26:31.980 [2024-11-20 09:17:08.922905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.980 [2024-11-20 09:17:08.922930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.980 qpair failed and we were unable to recover it. 00:26:31.980 [2024-11-20 09:17:08.923049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.980 [2024-11-20 09:17:08.923074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.980 qpair failed and we were unable to recover it. 
00:26:31.980 [2024-11-20 09:17:08.923180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.980 [2024-11-20 09:17:08.923208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.980 qpair failed and we were unable to recover it.
00:26:31.980 [2024-11-20 09:17:08.923295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.980 [2024-11-20 09:17:08.923328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.980 qpair failed and we were unable to recover it.
00:26:31.980 [2024-11-20 09:17:08.923445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.980 [2024-11-20 09:17:08.923472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.980 qpair failed and we were unable to recover it.
00:26:31.980 [2024-11-20 09:17:08.923551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.923577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.923690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.923716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.923834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.923860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.923950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.923976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.924139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.924178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.924272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.924300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.924400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.924427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.924514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.924542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.924627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.924655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.924738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.924765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.924855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.924881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.924967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.924993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.925096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.925122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.925236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.925263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.925363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.925392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.925482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.925508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.925624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.925651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.925760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.925792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.925905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.925930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.926045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.926072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.926151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.926178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.926294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.926327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.926439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.926467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.926549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.926575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.926714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.926739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.926819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.926845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.926939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.926965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.927077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.927102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.927187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.927214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.927348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.927377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.927482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.927521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.927619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.927647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.927755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.927781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.927871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.927897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.927984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.928011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.928126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.928152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.928268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.981 [2024-11-20 09:17:08.928293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.981 qpair failed and we were unable to recover it.
00:26:31.981 [2024-11-20 09:17:08.928415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.928440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.928555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.928581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.928656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.928681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.928799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.928824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.928962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.928987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.929076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.929102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.929193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.929221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.929310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.929343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.929458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.929484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.929619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.929645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.929738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.929763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.929874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.929900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.929986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.930012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.930124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.930151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.930268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.930293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.930395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.930421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.930503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.930528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.930638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.930663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.930751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.930777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.930866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.930895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.931007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.931034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.931119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.931146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.931233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.931261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.931361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.931388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.931477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.931504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.931596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.931622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.931711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.931736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.931823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.931850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.931960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.931986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.932066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.932091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.932190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.932228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.932352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.932380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.932517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.932543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.932651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.932677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.932764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.932793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.932911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.932937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.933033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.982 [2024-11-20 09:17:08.933058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.982 qpair failed and we were unable to recover it.
00:26:31.982 [2024-11-20 09:17:08.933138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.933165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.933249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.933275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.933395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.933421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.933501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.933527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.933626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.933652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.933762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.933788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.933904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.933932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.934020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.934046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.934137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.934165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.934281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.934322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.934419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.934445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.934527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.934553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.934668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.934695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.934817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.934843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.934959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.934987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.935109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.935136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.935221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.935247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.935338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.935365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.935459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.935484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.935597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.935622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.935734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.935760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.935855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.935882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.935963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.935988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.936102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.936128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.936239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.936264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.936369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.936408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.936554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.936583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.936698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.936724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.936865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.936891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.937006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.937033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.937145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.937172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.937256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.937282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.937375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.937403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.937516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.937542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.937677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.937702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.937816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.937841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.937964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.937989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.983 [2024-11-20 09:17:08.938072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.983 [2024-11-20 09:17:08.938103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:31.983 qpair failed and we were unable to recover it.
00:26:31.984 [2024-11-20 09:17:08.938244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-20 09:17:08.938270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-20 09:17:08.938390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-20 09:17:08.938416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-20 09:17:08.938507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-20 09:17:08.938532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-20 09:17:08.938611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-20 09:17:08.938636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-20 09:17:08.938751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-20 09:17:08.938776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 
00:26:31.984 [2024-11-20 09:17:08.938859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-20 09:17:08.938884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-20 09:17:08.938973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-20 09:17:08.938998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-20 09:17:08.939130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-20 09:17:08.939169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-20 09:17:08.939299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-20 09:17:08.939348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-20 09:17:08.939488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-20 09:17:08.939527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 
00:26:31.984 [2024-11-20 09:17:08.939657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-20 09:17:08.939685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-20 09:17:08.939782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-20 09:17:08.939808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-20 09:17:08.939894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-20 09:17:08.939920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-20 09:17:08.940044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-20 09:17:08.940070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-20 09:17:08.940181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-20 09:17:08.940207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 
00:26:31.984 [2024-11-20 09:17:08.940318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-20 09:17:08.940343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-20 09:17:08.940460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-20 09:17:08.940486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-20 09:17:08.940574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-20 09:17:08.940599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-20 09:17:08.940710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-20 09:17:08.940735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-20 09:17:08.940807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-20 09:17:08.940833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 
00:26:31.984 [2024-11-20 09:17:08.940922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-20 09:17:08.940947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-20 09:17:08.941061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-20 09:17:08.941088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-20 09:17:08.941177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-20 09:17:08.941205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-20 09:17:08.941332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-20 09:17:08.941372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-20 09:17:08.941457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-20 09:17:08.941484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 
00:26:31.984 [2024-11-20 09:17:08.941599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-20 09:17:08.941625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-20 09:17:08.941713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-20 09:17:08.941745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-20 09:17:08.941858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-20 09:17:08.941884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-20 09:17:08.941970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-20 09:17:08.941996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-20 09:17:08.942135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-20 09:17:08.942160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 
00:26:31.984 [2024-11-20 09:17:08.942277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-20 09:17:08.942309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-20 09:17:08.942403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.984 [2024-11-20 09:17:08.942429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.984 qpair failed and we were unable to recover it. 00:26:31.984 [2024-11-20 09:17:08.942521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.942550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-20 09:17:08.942642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.942668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-20 09:17:08.942758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.942784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 
00:26:31.985 [2024-11-20 09:17:08.942870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.942897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-20 09:17:08.943007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.943033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-20 09:17:08.943155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.943182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-20 09:17:08.943294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.943328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-20 09:17:08.943417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.943443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 
00:26:31.985 [2024-11-20 09:17:08.943565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.943591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-20 09:17:08.943703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.943728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-20 09:17:08.943806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.943832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-20 09:17:08.943944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.943971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-20 09:17:08.944092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.944118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 
00:26:31.985 [2024-11-20 09:17:08.944230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.944256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-20 09:17:08.944375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.944402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-20 09:17:08.944494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.944521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-20 09:17:08.944606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.944632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-20 09:17:08.944743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.944770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 
00:26:31.985 [2024-11-20 09:17:08.944890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.944918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-20 09:17:08.945009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.945035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-20 09:17:08.945124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.945154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-20 09:17:08.945271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.945309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-20 09:17:08.945405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.945432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 
00:26:31.985 [2024-11-20 09:17:08.945547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.945573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-20 09:17:08.945714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.945740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-20 09:17:08.945839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.945865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-20 09:17:08.945947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.945973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-20 09:17:08.946082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.946108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 
00:26:31.985 [2024-11-20 09:17:08.946187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.946214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-20 09:17:08.946321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.946348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-20 09:17:08.946432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.946457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-20 09:17:08.946553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.946579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-20 09:17:08.946691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.946716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 
00:26:31.985 [2024-11-20 09:17:08.946805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.946830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-20 09:17:08.946968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.946993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-20 09:17:08.947088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.947117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-20 09:17:08.947237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.947263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.985 qpair failed and we were unable to recover it. 00:26:31.985 [2024-11-20 09:17:08.947355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.985 [2024-11-20 09:17:08.947383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.986 qpair failed and we were unable to recover it. 
00:26:31.986 [2024-11-20 09:17:08.947494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.986 [2024-11-20 09:17:08.947520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.986 qpair failed and we were unable to recover it. 00:26:31.986 [2024-11-20 09:17:08.947605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.986 [2024-11-20 09:17:08.947631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.986 qpair failed and we were unable to recover it. 00:26:31.986 [2024-11-20 09:17:08.947716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.986 [2024-11-20 09:17:08.947743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.986 qpair failed and we were unable to recover it. 00:26:31.986 [2024-11-20 09:17:08.947826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.986 [2024-11-20 09:17:08.947852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.986 qpair failed and we were unable to recover it. 00:26:31.986 [2024-11-20 09:17:08.947937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.986 [2024-11-20 09:17:08.947964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.986 qpair failed and we were unable to recover it. 
00:26:31.986 [2024-11-20 09:17:08.948115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.986 [2024-11-20 09:17:08.948154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:31.986 qpair failed and we were unable to recover it.
[... the same record pair — posix.c:1054:posix_sock_create connect() failed, errno = 111 (ECONNREFUSED), followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error — repeats continuously for tqpairs 0x7fe96c000b90, 0x7fe974000b90, 0x7fe968000b90, and 0x18fffa0, all against addr=10.0.0.2, port=4420, from 2024-11-20 09:17:08.948271 through 09:17:08.963508; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:26:31.989 [2024-11-20 09:17:08.963610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.989 [2024-11-20 09:17:08.963636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.989 qpair failed and we were unable to recover it. 00:26:31.989 [2024-11-20 09:17:08.963717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.989 [2024-11-20 09:17:08.963742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.989 qpair failed and we were unable to recover it. 00:26:31.989 [2024-11-20 09:17:08.963844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.989 [2024-11-20 09:17:08.963870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.989 qpair failed and we were unable to recover it. 00:26:31.989 [2024-11-20 09:17:08.963994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.989 [2024-11-20 09:17:08.964021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.989 qpair failed and we were unable to recover it. 00:26:31.989 [2024-11-20 09:17:08.964111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.989 [2024-11-20 09:17:08.964138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.989 qpair failed and we were unable to recover it. 
00:26:31.989 [2024-11-20 09:17:08.964253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.989 [2024-11-20 09:17:08.964279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.989 qpair failed and we were unable to recover it. 00:26:31.989 [2024-11-20 09:17:08.964372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.989 [2024-11-20 09:17:08.964400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.989 qpair failed and we were unable to recover it. 00:26:31.989 [2024-11-20 09:17:08.964515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.989 [2024-11-20 09:17:08.964553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.989 qpair failed and we were unable to recover it. 00:26:31.989 [2024-11-20 09:17:08.964647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.989 [2024-11-20 09:17:08.964675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.989 qpair failed and we were unable to recover it. 00:26:31.989 [2024-11-20 09:17:08.964799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.989 [2024-11-20 09:17:08.964825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.989 qpair failed and we were unable to recover it. 
00:26:31.989 [2024-11-20 09:17:08.964916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.989 [2024-11-20 09:17:08.964941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.989 qpair failed and we were unable to recover it. 00:26:31.989 [2024-11-20 09:17:08.965062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.989 [2024-11-20 09:17:08.965088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.989 qpair failed and we were unable to recover it. 00:26:31.989 [2024-11-20 09:17:08.965175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.989 [2024-11-20 09:17:08.965201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.989 qpair failed and we were unable to recover it. 00:26:31.989 [2024-11-20 09:17:08.965323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.989 [2024-11-20 09:17:08.965350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.989 qpair failed and we were unable to recover it. 00:26:31.989 [2024-11-20 09:17:08.965494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.989 [2024-11-20 09:17:08.965533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.989 qpair failed and we were unable to recover it. 
00:26:31.989 [2024-11-20 09:17:08.965668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.989 [2024-11-20 09:17:08.965696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.989 qpair failed and we were unable to recover it. 00:26:31.989 [2024-11-20 09:17:08.965812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.989 [2024-11-20 09:17:08.965839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.989 qpair failed and we were unable to recover it. 00:26:31.989 [2024-11-20 09:17:08.965918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.989 [2024-11-20 09:17:08.965945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.989 qpair failed and we were unable to recover it. 00:26:31.989 [2024-11-20 09:17:08.966040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.989 [2024-11-20 09:17:08.966079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.989 qpair failed and we were unable to recover it. 00:26:31.989 [2024-11-20 09:17:08.966214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.989 [2024-11-20 09:17:08.966253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.989 qpair failed and we were unable to recover it. 
00:26:31.989 [2024-11-20 09:17:08.966382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.989 [2024-11-20 09:17:08.966409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.989 qpair failed and we were unable to recover it. 00:26:31.989 [2024-11-20 09:17:08.966521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.989 [2024-11-20 09:17:08.966548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.989 qpair failed and we were unable to recover it. 00:26:31.989 [2024-11-20 09:17:08.966636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.989 [2024-11-20 09:17:08.966663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.989 qpair failed and we were unable to recover it. 00:26:31.989 [2024-11-20 09:17:08.966750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.966776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 00:26:31.990 [2024-11-20 09:17:08.966867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.966894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 
00:26:31.990 [2024-11-20 09:17:08.967003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.967043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 00:26:31.990 [2024-11-20 09:17:08.967167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.967194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 00:26:31.990 [2024-11-20 09:17:08.967281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.967313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 00:26:31.990 [2024-11-20 09:17:08.967426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.967452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 00:26:31.990 [2024-11-20 09:17:08.967566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.967592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 
00:26:31.990 [2024-11-20 09:17:08.967696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.967722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 00:26:31.990 [2024-11-20 09:17:08.967815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.967842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 00:26:31.990 [2024-11-20 09:17:08.967962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.967992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 00:26:31.990 [2024-11-20 09:17:08.968110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.968138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 00:26:31.990 [2024-11-20 09:17:08.968239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.968279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 
00:26:31.990 [2024-11-20 09:17:08.968418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.968446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 00:26:31.990 [2024-11-20 09:17:08.968529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.968556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 00:26:31.990 [2024-11-20 09:17:08.968666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.968699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 00:26:31.990 [2024-11-20 09:17:08.968783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.968810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 00:26:31.990 [2024-11-20 09:17:08.968905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.968933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 
00:26:31.990 [2024-11-20 09:17:08.969037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.969077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 00:26:31.990 [2024-11-20 09:17:08.969201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.969228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 00:26:31.990 [2024-11-20 09:17:08.969318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.969346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 00:26:31.990 [2024-11-20 09:17:08.969430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.969457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 00:26:31.990 [2024-11-20 09:17:08.969548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.969575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 
00:26:31.990 [2024-11-20 09:17:08.969692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.969719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 00:26:31.990 [2024-11-20 09:17:08.969856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.969884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 00:26:31.990 [2024-11-20 09:17:08.970000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.970029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 00:26:31.990 [2024-11-20 09:17:08.970156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.970195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 00:26:31.990 [2024-11-20 09:17:08.970289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.970326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 
00:26:31.990 [2024-11-20 09:17:08.970443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.970469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 00:26:31.990 [2024-11-20 09:17:08.970588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.970614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 00:26:31.990 [2024-11-20 09:17:08.970704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.970730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 00:26:31.990 [2024-11-20 09:17:08.970812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.970838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 00:26:31.990 [2024-11-20 09:17:08.970949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.970975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 
00:26:31.990 [2024-11-20 09:17:08.971062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.971089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 00:26:31.990 [2024-11-20 09:17:08.971171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.971198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 00:26:31.990 [2024-11-20 09:17:08.971313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.971341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 00:26:31.990 [2024-11-20 09:17:08.971434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.971461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 00:26:31.990 [2024-11-20 09:17:08.971545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.971572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 
00:26:31.990 [2024-11-20 09:17:08.971687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.990 [2024-11-20 09:17:08.971713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.990 qpair failed and we were unable to recover it. 00:26:31.991 [2024-11-20 09:17:08.971795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-20 09:17:08.971822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 00:26:31.991 [2024-11-20 09:17:08.971908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-20 09:17:08.971937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 00:26:31.991 [2024-11-20 09:17:08.972025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-20 09:17:08.972051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 00:26:31.991 [2024-11-20 09:17:08.972167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-20 09:17:08.972200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 
00:26:31.991 [2024-11-20 09:17:08.972323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-20 09:17:08.972350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 00:26:31.991 [2024-11-20 09:17:08.972465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-20 09:17:08.972491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 00:26:31.991 [2024-11-20 09:17:08.972641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-20 09:17:08.972667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 00:26:31.991 [2024-11-20 09:17:08.972779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-20 09:17:08.972805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 00:26:31.991 [2024-11-20 09:17:08.972921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-20 09:17:08.972948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 
00:26:31.991 [2024-11-20 09:17:08.973061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-20 09:17:08.973087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 00:26:31.991 [2024-11-20 09:17:08.973190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-20 09:17:08.973229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 00:26:31.991 [2024-11-20 09:17:08.973320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-20 09:17:08.973349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 00:26:31.991 [2024-11-20 09:17:08.973432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-20 09:17:08.973458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 00:26:31.991 [2024-11-20 09:17:08.973552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-20 09:17:08.973577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 
00:26:31.991 [2024-11-20 09:17:08.973667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-20 09:17:08.973692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 00:26:31.991 [2024-11-20 09:17:08.973802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-20 09:17:08.973827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 00:26:31.991 [2024-11-20 09:17:08.973940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-20 09:17:08.973966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 00:26:31.991 [2024-11-20 09:17:08.974048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-20 09:17:08.974073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 00:26:31.991 [2024-11-20 09:17:08.974187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.991 [2024-11-20 09:17:08.974216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.991 qpair failed and we were unable to recover it. 
00:26:31.994 [2024-11-20 09:17:08.989054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-20 09:17:08.989093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-20 09:17:08.989207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-20 09:17:08.989235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-20 09:17:08.989321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-20 09:17:08.989349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-20 09:17:08.989445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-20 09:17:08.989471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-20 09:17:08.989587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-20 09:17:08.989615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 
00:26:31.994 [2024-11-20 09:17:08.989701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-20 09:17:08.989727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-20 09:17:08.989818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-20 09:17:08.989845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-20 09:17:08.989926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-20 09:17:08.989951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-20 09:17:08.990042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-20 09:17:08.990077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-20 09:17:08.990199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-20 09:17:08.990226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 
00:26:31.994 [2024-11-20 09:17:08.990312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-20 09:17:08.990340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-20 09:17:08.990452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-20 09:17:08.990478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-20 09:17:08.990590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-20 09:17:08.990616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-20 09:17:08.990731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-20 09:17:08.990757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-20 09:17:08.990899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-20 09:17:08.990926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 
00:26:31.994 [2024-11-20 09:17:08.991043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.994 [2024-11-20 09:17:08.991069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.994 qpair failed and we were unable to recover it. 00:26:31.994 [2024-11-20 09:17:08.991156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-20 09:17:08.991181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-20 09:17:08.991259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-20 09:17:08.991285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-20 09:17:08.991382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-20 09:17:08.991408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 00:26:31.995 [2024-11-20 09:17:08.991490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-20 09:17:08.991516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:31.995 qpair failed and we were unable to recover it. 
00:26:31.995 [2024-11-20 09:17:08.991599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.995 [2024-11-20 09:17:08.991626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.284 qpair failed and we were unable to recover it. 00:26:32.284 [2024-11-20 09:17:08.991736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.284 [2024-11-20 09:17:08.991762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.284 qpair failed and we were unable to recover it. 00:26:32.284 [2024-11-20 09:17:08.991853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.284 [2024-11-20 09:17:08.991879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.284 qpair failed and we were unable to recover it. 00:26:32.284 [2024-11-20 09:17:08.991991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.284 [2024-11-20 09:17:08.992016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.284 qpair failed and we were unable to recover it. 00:26:32.284 [2024-11-20 09:17:08.992105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.284 [2024-11-20 09:17:08.992130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.284 qpair failed and we were unable to recover it. 
00:26:32.284 [2024-11-20 09:17:08.992250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.284 [2024-11-20 09:17:08.992279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.284 qpair failed and we were unable to recover it. 00:26:32.284 [2024-11-20 09:17:08.992379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.284 [2024-11-20 09:17:08.992411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.284 qpair failed and we were unable to recover it. 00:26:32.284 [2024-11-20 09:17:08.992499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.284 [2024-11-20 09:17:08.992526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.284 qpair failed and we were unable to recover it. 00:26:32.284 [2024-11-20 09:17:08.992611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.284 [2024-11-20 09:17:08.992637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.284 qpair failed and we were unable to recover it. 00:26:32.284 [2024-11-20 09:17:08.992738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.284 [2024-11-20 09:17:08.992765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.284 qpair failed and we were unable to recover it. 
00:26:32.284 [2024-11-20 09:17:08.992852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.284 [2024-11-20 09:17:08.992878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.284 qpair failed and we were unable to recover it. 00:26:32.284 [2024-11-20 09:17:08.992967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.284 [2024-11-20 09:17:08.992994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.284 qpair failed and we were unable to recover it. 00:26:32.284 [2024-11-20 09:17:08.993080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.284 [2024-11-20 09:17:08.993105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.284 qpair failed and we were unable to recover it. 00:26:32.284 [2024-11-20 09:17:08.993196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.284 [2024-11-20 09:17:08.993236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.284 qpair failed and we were unable to recover it. 00:26:32.284 [2024-11-20 09:17:08.993325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.284 [2024-11-20 09:17:08.993353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.284 qpair failed and we were unable to recover it. 
00:26:32.284 [2024-11-20 09:17:08.993485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.284 [2024-11-20 09:17:08.993514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.284 qpair failed and we were unable to recover it. 00:26:32.284 [2024-11-20 09:17:08.993603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.284 [2024-11-20 09:17:08.993645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.284 qpair failed and we were unable to recover it. 00:26:32.284 [2024-11-20 09:17:08.993746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.284 [2024-11-20 09:17:08.993773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.284 qpair failed and we were unable to recover it. 00:26:32.285 [2024-11-20 09:17:08.993863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.993891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 00:26:32.285 [2024-11-20 09:17:08.994008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.994035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 
00:26:32.285 [2024-11-20 09:17:08.994127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.994153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 00:26:32.285 [2024-11-20 09:17:08.994277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.994322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 00:26:32.285 [2024-11-20 09:17:08.994442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.994469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 00:26:32.285 [2024-11-20 09:17:08.994548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.994576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 00:26:32.285 [2024-11-20 09:17:08.994660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.994686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 
00:26:32.285 [2024-11-20 09:17:08.994803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.994829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 00:26:32.285 [2024-11-20 09:17:08.994905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.994931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 00:26:32.285 [2024-11-20 09:17:08.995021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.995048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 00:26:32.285 [2024-11-20 09:17:08.995161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.995187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 00:26:32.285 [2024-11-20 09:17:08.995312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.995339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 
00:26:32.285 [2024-11-20 09:17:08.995435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.995463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 00:26:32.285 [2024-11-20 09:17:08.995553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.995580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 00:26:32.285 [2024-11-20 09:17:08.995670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.995696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 00:26:32.285 [2024-11-20 09:17:08.995808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.995835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 00:26:32.285 [2024-11-20 09:17:08.995947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.995973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 
00:26:32.285 [2024-11-20 09:17:08.996125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.996165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 00:26:32.285 [2024-11-20 09:17:08.996286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.996322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 00:26:32.285 [2024-11-20 09:17:08.996427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.996454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 00:26:32.285 [2024-11-20 09:17:08.996566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.996592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 00:26:32.285 [2024-11-20 09:17:08.996678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.996704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 
00:26:32.285 [2024-11-20 09:17:08.996797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.996825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 00:26:32.285 [2024-11-20 09:17:08.996910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.996937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 00:26:32.285 [2024-11-20 09:17:08.997062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.997089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 00:26:32.285 [2024-11-20 09:17:08.997202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.997229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 00:26:32.285 [2024-11-20 09:17:08.997344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.997372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 
00:26:32.285 [2024-11-20 09:17:08.997470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.997509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 00:26:32.285 [2024-11-20 09:17:08.997600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.997627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 00:26:32.285 [2024-11-20 09:17:08.997715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.997743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 00:26:32.285 [2024-11-20 09:17:08.997826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.997852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 00:26:32.285 [2024-11-20 09:17:08.997936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.997961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 
00:26:32.285 [2024-11-20 09:17:08.998059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.998084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 00:26:32.285 [2024-11-20 09:17:08.998223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.998250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 00:26:32.285 [2024-11-20 09:17:08.998389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.998428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 00:26:32.285 [2024-11-20 09:17:08.998553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.998581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 00:26:32.285 [2024-11-20 09:17:08.998694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.285 [2024-11-20 09:17:08.998720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.285 qpair failed and we were unable to recover it. 
00:26:32.285 [2024-11-20 09:17:08.998825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.285 [2024-11-20 09:17:08.998856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.285 qpair failed and we were unable to recover it.
00:26:32.285 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats continuously from [2024-11-20 09:17:08.998964] through [2024-11-20 09:17:09.013821], cycling over tqpair values 0x7fe974000b90, 0x7fe96c000b90, 0x7fe968000b90, and 0x18fffa0, all targeting addr=10.0.0.2, port=4420 ...]
00:26:32.288 [2024-11-20 09:17:09.013928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-20 09:17:09.013953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-20 09:17:09.014066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-20 09:17:09.014094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-20 09:17:09.014179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-20 09:17:09.014205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-20 09:17:09.014321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-20 09:17:09.014349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-20 09:17:09.014431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-20 09:17:09.014459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 
00:26:32.288 [2024-11-20 09:17:09.014570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-20 09:17:09.014596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-20 09:17:09.014686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-20 09:17:09.014713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-20 09:17:09.014854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-20 09:17:09.014880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-20 09:17:09.014975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-20 09:17:09.015002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-20 09:17:09.015105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-20 09:17:09.015143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 
00:26:32.288 [2024-11-20 09:17:09.015227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-20 09:17:09.015254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-20 09:17:09.015371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.288 [2024-11-20 09:17:09.015398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.288 qpair failed and we were unable to recover it. 00:26:32.288 [2024-11-20 09:17:09.015485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.015510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.015617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.015642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.015725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.015750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 
00:26:32.289 [2024-11-20 09:17:09.015873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.015900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.015994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.016020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.016109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.016135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.016252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.016277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.016400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.016435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 
00:26:32.289 [2024-11-20 09:17:09.016554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.016581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.016697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.016722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.016830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.016855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.016953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.016992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.017082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.017109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 
00:26:32.289 [2024-11-20 09:17:09.017191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.017216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.017332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.017359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.017450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.017475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.017582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.017607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.017727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.017753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 
00:26:32.289 [2024-11-20 09:17:09.017843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.017869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.017961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.017990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.018100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.018126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.018244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.018284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.018392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.018419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 
00:26:32.289 [2024-11-20 09:17:09.018509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.018548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.018663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.018690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.018781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.018807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.018913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.018939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.019067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.019107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 
00:26:32.289 [2024-11-20 09:17:09.019258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.019286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.019431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.019457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.019553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.019578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.019696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.019722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.019835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.019860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 
00:26:32.289 [2024-11-20 09:17:09.019952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.019977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.020074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.020120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.020220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.020248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.020344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.020370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.020489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.020516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 
00:26:32.289 [2024-11-20 09:17:09.020613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.020643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.020742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.020768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.020850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.020876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.020994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.021020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.021140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.021167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 
00:26:32.289 [2024-11-20 09:17:09.021273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.021300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.021398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.021426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.021508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.021534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.021627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.289 [2024-11-20 09:17:09.021654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.289 qpair failed and we were unable to recover it. 00:26:32.289 [2024-11-20 09:17:09.021764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-20 09:17:09.021791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 
00:26:32.290 [2024-11-20 09:17:09.021892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-20 09:17:09.021919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-20 09:17:09.022026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-20 09:17:09.022053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-20 09:17:09.022137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-20 09:17:09.022163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-20 09:17:09.022310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-20 09:17:09.022337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 00:26:32.290 [2024-11-20 09:17:09.022456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.290 [2024-11-20 09:17:09.022483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.290 qpair failed and we were unable to recover it. 
00:26:32.314 [2024-11-20 09:17:09.022564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.314 [2024-11-20 09:17:09.022590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.314 qpair failed and we were unable to recover it. 00:26:32.314 [2024-11-20 09:17:09.022727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.314 [2024-11-20 09:17:09.022754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.314 qpair failed and we were unable to recover it. 00:26:32.314 [2024-11-20 09:17:09.022871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.314 [2024-11-20 09:17:09.022898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.314 qpair failed and we were unable to recover it. 00:26:32.314 [2024-11-20 09:17:09.022988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.314 [2024-11-20 09:17:09.023014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.314 qpair failed and we were unable to recover it. 00:26:32.314 [2024-11-20 09:17:09.023142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.314 [2024-11-20 09:17:09.023180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.314 qpair failed and we were unable to recover it. 
00:26:32.314 [2024-11-20 09:17:09.023299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.314 [2024-11-20 09:17:09.023336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.314 qpair failed and we were unable to recover it. 00:26:32.314 [2024-11-20 09:17:09.023435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.314 [2024-11-20 09:17:09.023462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.314 qpair failed and we were unable to recover it. 00:26:32.314 [2024-11-20 09:17:09.023552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.314 [2024-11-20 09:17:09.023578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.314 qpair failed and we were unable to recover it. 00:26:32.314 [2024-11-20 09:17:09.023692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.314 [2024-11-20 09:17:09.023719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.314 qpair failed and we were unable to recover it. 00:26:32.314 [2024-11-20 09:17:09.023804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.314 [2024-11-20 09:17:09.023831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.314 qpair failed and we were unable to recover it. 
00:26:32.314 [2024-11-20 09:17:09.023922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.314 [2024-11-20 09:17:09.023948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.314 qpair failed and we were unable to recover it.
00:26:32.314 [2024-11-20 09:17:09.024087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.314 [2024-11-20 09:17:09.024112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.314 qpair failed and we were unable to recover it.
00:26:32.314 [2024-11-20 09:17:09.024220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.314 [2024-11-20 09:17:09.024245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.314 qpair failed and we were unable to recover it.
00:26:32.314 [2024-11-20 09:17:09.024359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.315 [2024-11-20 09:17:09.024387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.315 qpair failed and we were unable to recover it.
00:26:32.315 [2024-11-20 09:17:09.024486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.315 [2024-11-20 09:17:09.024516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.315 qpair failed and we were unable to recover it.
00:26:32.315 [2024-11-20 09:17:09.024608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.315 [2024-11-20 09:17:09.024635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.315 qpair failed and we were unable to recover it.
00:26:32.315 [2024-11-20 09:17:09.024724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.315 [2024-11-20 09:17:09.024750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.315 qpair failed and we were unable to recover it.
00:26:32.315 [2024-11-20 09:17:09.024847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.315 [2024-11-20 09:17:09.024886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.315 qpair failed and we were unable to recover it.
00:26:32.315 [2024-11-20 09:17:09.024986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.315 [2024-11-20 09:17:09.025015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.315 qpair failed and we were unable to recover it.
00:26:32.315 [2024-11-20 09:17:09.025109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.315 [2024-11-20 09:17:09.025136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.315 qpair failed and we were unable to recover it.
00:26:32.315 [2024-11-20 09:17:09.025250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.315 [2024-11-20 09:17:09.025276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.315 qpair failed and we were unable to recover it.
00:26:32.315 [2024-11-20 09:17:09.025368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.315 [2024-11-20 09:17:09.025400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.315 qpair failed and we were unable to recover it.
00:26:32.315 [2024-11-20 09:17:09.025501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.315 [2024-11-20 09:17:09.025527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.315 qpair failed and we were unable to recover it.
00:26:32.315 [2024-11-20 09:17:09.025609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.315 [2024-11-20 09:17:09.025635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.315 qpair failed and we were unable to recover it.
00:26:32.315 [2024-11-20 09:17:09.025718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.315 [2024-11-20 09:17:09.025744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.315 qpair failed and we were unable to recover it.
00:26:32.315 [2024-11-20 09:17:09.025824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.315 [2024-11-20 09:17:09.025850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.315 qpair failed and we were unable to recover it.
00:26:32.315 [2024-11-20 09:17:09.025928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.315 [2024-11-20 09:17:09.025954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.315 qpair failed and we were unable to recover it.
00:26:32.315 [2024-11-20 09:17:09.026076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.315 [2024-11-20 09:17:09.026116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.315 qpair failed and we were unable to recover it.
00:26:32.315 [2024-11-20 09:17:09.026211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.315 [2024-11-20 09:17:09.026238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.315 qpair failed and we were unable to recover it.
00:26:32.315 [2024-11-20 09:17:09.026318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.315 [2024-11-20 09:17:09.026345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.315 qpair failed and we were unable to recover it.
00:26:32.315 [2024-11-20 09:17:09.026485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.315 [2024-11-20 09:17:09.026511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.315 qpair failed and we were unable to recover it.
00:26:32.315 [2024-11-20 09:17:09.026610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.315 [2024-11-20 09:17:09.026639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.315 qpair failed and we were unable to recover it.
00:26:32.315 [2024-11-20 09:17:09.026755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.315 [2024-11-20 09:17:09.026784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.315 qpair failed and we were unable to recover it.
00:26:32.315 [2024-11-20 09:17:09.026873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.315 [2024-11-20 09:17:09.026900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.315 qpair failed and we were unable to recover it.
00:26:32.315 [2024-11-20 09:17:09.027008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.315 [2024-11-20 09:17:09.027034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.315 qpair failed and we were unable to recover it.
00:26:32.315 [2024-11-20 09:17:09.027133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.315 [2024-11-20 09:17:09.027159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.315 qpair failed and we were unable to recover it.
00:26:32.315 [2024-11-20 09:17:09.027241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.315 [2024-11-20 09:17:09.027267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.315 qpair failed and we were unable to recover it.
00:26:32.315 [2024-11-20 09:17:09.027383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.315 [2024-11-20 09:17:09.027409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.315 qpair failed and we were unable to recover it.
00:26:32.315 [2024-11-20 09:17:09.027505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.315 [2024-11-20 09:17:09.027530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.315 qpair failed and we were unable to recover it.
00:26:32.315 [2024-11-20 09:17:09.027614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.315 [2024-11-20 09:17:09.027640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.315 qpair failed and we were unable to recover it.
00:26:32.315 [2024-11-20 09:17:09.027727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.315 [2024-11-20 09:17:09.027753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.315 qpair failed and we were unable to recover it.
00:26:32.315 [2024-11-20 09:17:09.027858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.315 [2024-11-20 09:17:09.027885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.315 qpair failed and we were unable to recover it.
00:26:32.315 [2024-11-20 09:17:09.027972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.315 [2024-11-20 09:17:09.028000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.315 qpair failed and we were unable to recover it.
00:26:32.315 [2024-11-20 09:17:09.028110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.315 [2024-11-20 09:17:09.028148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.315 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.028244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.028271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.028365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.028392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.028503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.028529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.028620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.028645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.028754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.028781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.028898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.028926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.029045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.029072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.029154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.029181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.029266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.029292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.029387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.029414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.029528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.029556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.029672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.029699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.029786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.029813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.029919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.029945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.030030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.030058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.030168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.030208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.030300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.030334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.030445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.030476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.030591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.030617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.030700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.030725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.030835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.030861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.030974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.030999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.031081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.031107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.031195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.031223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.031342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.031370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.031452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.031478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.031567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.031593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.031683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.031709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.031797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.031827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.031947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.031997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.032113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.032141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.032240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.032267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.032387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.032415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.032528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.032553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.032726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.032774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.032863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.032888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.032972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.033001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.316 [2024-11-20 09:17:09.033099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.316 [2024-11-20 09:17:09.033125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.316 qpair failed and we were unable to recover it.
00:26:32.317 [2024-11-20 09:17:09.033237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.317 [2024-11-20 09:17:09.033263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.317 qpair failed and we were unable to recover it.
00:26:32.317 [2024-11-20 09:17:09.033354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.317 [2024-11-20 09:17:09.033381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.317 qpair failed and we were unable to recover it.
00:26:32.317 [2024-11-20 09:17:09.033470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.317 [2024-11-20 09:17:09.033498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.317 qpair failed and we were unable to recover it.
00:26:32.317 [2024-11-20 09:17:09.033640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.317 [2024-11-20 09:17:09.033666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.317 qpair failed and we were unable to recover it.
00:26:32.317 [2024-11-20 09:17:09.033756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.317 [2024-11-20 09:17:09.033783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.317 qpair failed and we were unable to recover it.
00:26:32.317 [2024-11-20 09:17:09.033889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.317 [2024-11-20 09:17:09.033915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.317 qpair failed and we were unable to recover it.
00:26:32.317 [2024-11-20 09:17:09.034003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.317 [2024-11-20 09:17:09.034035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.317 qpair failed and we were unable to recover it.
00:26:32.317 [2024-11-20 09:17:09.034127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.317 [2024-11-20 09:17:09.034152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.317 qpair failed and we were unable to recover it.
00:26:32.317 [2024-11-20 09:17:09.034269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.317 [2024-11-20 09:17:09.034296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.317 qpair failed and we were unable to recover it.
00:26:32.317 [2024-11-20 09:17:09.034422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.317 [2024-11-20 09:17:09.034448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.317 qpair failed and we were unable to recover it.
00:26:32.317 [2024-11-20 09:17:09.034558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.317 [2024-11-20 09:17:09.034584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.317 qpair failed and we were unable to recover it.
00:26:32.317 [2024-11-20 09:17:09.034662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.317 [2024-11-20 09:17:09.034688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.317 qpair failed and we were unable to recover it.
00:26:32.317 [2024-11-20 09:17:09.034767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.317 [2024-11-20 09:17:09.034793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.317 qpair failed and we were unable to recover it.
00:26:32.317 [2024-11-20 09:17:09.034908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.317 [2024-11-20 09:17:09.034935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.317 qpair failed and we were unable to recover it.
00:26:32.317 [2024-11-20 09:17:09.035020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.317 [2024-11-20 09:17:09.035046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.317 qpair failed and we were unable to recover it.
00:26:32.317 [2024-11-20 09:17:09.035172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.317 [2024-11-20 09:17:09.035212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.317 qpair failed and we were unable to recover it.
00:26:32.317 [2024-11-20 09:17:09.035324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.317 [2024-11-20 09:17:09.035365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.317 qpair failed and we were unable to recover it.
00:26:32.317 [2024-11-20 09:17:09.035458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.317 [2024-11-20 09:17:09.035485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.317 qpair failed and we were unable to recover it.
00:26:32.317 [2024-11-20 09:17:09.035598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.317 [2024-11-20 09:17:09.035624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.317 qpair failed and we were unable to recover it.
00:26:32.317 [2024-11-20 09:17:09.035763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.317 [2024-11-20 09:17:09.035813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.317 qpair failed and we were unable to recover it.
00:26:32.317 [2024-11-20 09:17:09.035991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.317 [2024-11-20 09:17:09.036043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.317 qpair failed and we were unable to recover it.
00:26:32.317 [2024-11-20 09:17:09.036123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.317 [2024-11-20 09:17:09.036149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.317 qpair failed and we were unable to recover it.
00:26:32.317 [2024-11-20 09:17:09.036280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.317 [2024-11-20 09:17:09.036327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.317 qpair failed and we were unable to recover it.
00:26:32.317 [2024-11-20 09:17:09.036427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.317 [2024-11-20 09:17:09.036466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.317 qpair failed and we were unable to recover it.
00:26:32.318 [2024-11-20 09:17:09.036568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.318 [2024-11-20 09:17:09.036596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.318 qpair failed and we were unable to recover it.
00:26:32.318 [2024-11-20 09:17:09.036683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.318 [2024-11-20 09:17:09.036710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.318 qpair failed and we were unable to recover it.
00:26:32.318 [2024-11-20 09:17:09.036819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.318 [2024-11-20 09:17:09.036869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.318 qpair failed and we were unable to recover it.
00:26:32.318 [2024-11-20 09:17:09.037011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.318 [2024-11-20 09:17:09.037058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.318 qpair failed and we were unable to recover it.
00:26:32.318 [2024-11-20 09:17:09.037199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.318 [2024-11-20 09:17:09.037225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.318 qpair failed and we were unable to recover it.
00:26:32.318 [2024-11-20 09:17:09.037374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.318 [2024-11-20 09:17:09.037405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.318 qpair failed and we were unable to recover it.
00:26:32.318 [2024-11-20 09:17:09.037521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.318 [2024-11-20 09:17:09.037548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.318 qpair failed and we were unable to recover it.
00:26:32.318 [2024-11-20 09:17:09.037682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.318 [2024-11-20 09:17:09.037720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.318 qpair failed and we were unable to recover it.
00:26:32.318 [2024-11-20 09:17:09.037874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.318 [2024-11-20 09:17:09.037925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.318 qpair failed and we were unable to recover it.
00:26:32.318 [2024-11-20 09:17:09.038045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.318 [2024-11-20 09:17:09.038072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.318 qpair failed and we were unable to recover it.
00:26:32.318 [2024-11-20 09:17:09.038185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.318 [2024-11-20 09:17:09.038212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.318 qpair failed and we were unable to recover it.
00:26:32.318 [2024-11-20 09:17:09.038326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.318 [2024-11-20 09:17:09.038364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.318 qpair failed and we were unable to recover it.
00:26:32.318 [2024-11-20 09:17:09.038483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.318 [2024-11-20 09:17:09.038510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.318 qpair failed and we were unable to recover it.
00:26:32.318 [2024-11-20 09:17:09.038602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.318 [2024-11-20 09:17:09.038628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.318 qpair failed and we were unable to recover it.
00:26:32.318 [2024-11-20 09:17:09.038717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.318 [2024-11-20 09:17:09.038743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.318 qpair failed and we were unable to recover it.
00:26:32.318 [2024-11-20 09:17:09.038829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.318 [2024-11-20 09:17:09.038855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.318 qpair failed and we were unable to recover it.
00:26:32.318 [2024-11-20 09:17:09.038994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.318 [2024-11-20 09:17:09.039020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.318 qpair failed and we were unable to recover it.
00:26:32.318 [2024-11-20 09:17:09.039113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.318 [2024-11-20 09:17:09.039141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.318 qpair failed and we were unable to recover it.
00:26:32.318 [2024-11-20 09:17:09.039252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.318 [2024-11-20 09:17:09.039279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.318 qpair failed and we were unable to recover it.
00:26:32.318 [2024-11-20 09:17:09.039378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.318 [2024-11-20 09:17:09.039406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.318 qpair failed and we were unable to recover it. 00:26:32.318 [2024-11-20 09:17:09.039495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.318 [2024-11-20 09:17:09.039521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.318 qpair failed and we were unable to recover it. 00:26:32.318 [2024-11-20 09:17:09.039640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.318 [2024-11-20 09:17:09.039667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.318 qpair failed and we were unable to recover it. 00:26:32.318 [2024-11-20 09:17:09.039781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.318 [2024-11-20 09:17:09.039812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.318 qpair failed and we were unable to recover it. 00:26:32.318 [2024-11-20 09:17:09.039921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.318 [2024-11-20 09:17:09.039947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.318 qpair failed and we were unable to recover it. 
00:26:32.318 [2024-11-20 09:17:09.040038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.318 [2024-11-20 09:17:09.040066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.318 qpair failed and we were unable to recover it. 00:26:32.318 [2024-11-20 09:17:09.040150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.318 [2024-11-20 09:17:09.040176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.318 qpair failed and we were unable to recover it. 00:26:32.318 [2024-11-20 09:17:09.040297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190df30 is same with the state(6) to be set 00:26:32.318 [2024-11-20 09:17:09.040462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.318 [2024-11-20 09:17:09.040502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.318 qpair failed and we were unable to recover it. 00:26:32.318 [2024-11-20 09:17:09.040599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.318 [2024-11-20 09:17:09.040626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.318 qpair failed and we were unable to recover it. 00:26:32.318 [2024-11-20 09:17:09.040740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.318 [2024-11-20 09:17:09.040766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.318 qpair failed and we were unable to recover it. 
00:26:32.318 [2024-11-20 09:17:09.040852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.318 [2024-11-20 09:17:09.040879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.318 qpair failed and we were unable to recover it. 00:26:32.318 [2024-11-20 09:17:09.041019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.318 [2024-11-20 09:17:09.041046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.318 qpair failed and we were unable to recover it. 00:26:32.318 [2024-11-20 09:17:09.041137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.318 [2024-11-20 09:17:09.041163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.318 qpair failed and we were unable to recover it. 00:26:32.318 [2024-11-20 09:17:09.041279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.318 [2024-11-20 09:17:09.041314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.318 qpair failed and we were unable to recover it. 00:26:32.318 [2024-11-20 09:17:09.041433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.318 [2024-11-20 09:17:09.041459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.318 qpair failed and we were unable to recover it. 
00:26:32.318 [2024-11-20 09:17:09.041540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.318 [2024-11-20 09:17:09.041566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.318 qpair failed and we were unable to recover it. 00:26:32.318 [2024-11-20 09:17:09.041652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.318 [2024-11-20 09:17:09.041683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.318 qpair failed and we were unable to recover it. 00:26:32.318 [2024-11-20 09:17:09.041774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.318 [2024-11-20 09:17:09.041800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.318 qpair failed and we were unable to recover it. 00:26:32.319 [2024-11-20 09:17:09.041882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.319 [2024-11-20 09:17:09.041909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.319 qpair failed and we were unable to recover it. 00:26:32.319 [2024-11-20 09:17:09.041985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.319 [2024-11-20 09:17:09.042011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.319 qpair failed and we were unable to recover it. 
00:26:32.319 [2024-11-20 09:17:09.042119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.319 [2024-11-20 09:17:09.042145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.319 qpair failed and we were unable to recover it. 00:26:32.319 [2024-11-20 09:17:09.042251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.319 [2024-11-20 09:17:09.042290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.319 qpair failed and we were unable to recover it. 00:26:32.319 [2024-11-20 09:17:09.042388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.319 [2024-11-20 09:17:09.042416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.319 qpair failed and we were unable to recover it. 00:26:32.319 [2024-11-20 09:17:09.042521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.319 [2024-11-20 09:17:09.042560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.319 qpair failed and we were unable to recover it. 00:26:32.319 [2024-11-20 09:17:09.042668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.319 [2024-11-20 09:17:09.042696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.319 qpair failed and we were unable to recover it. 
00:26:32.319 [2024-11-20 09:17:09.042811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.319 [2024-11-20 09:17:09.042836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.319 qpair failed and we were unable to recover it. 00:26:32.319 [2024-11-20 09:17:09.042924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.319 [2024-11-20 09:17:09.042950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.319 qpair failed and we were unable to recover it. 00:26:32.319 [2024-11-20 09:17:09.043080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.319 [2024-11-20 09:17:09.043141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.319 qpair failed and we were unable to recover it. 00:26:32.319 [2024-11-20 09:17:09.043261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.319 [2024-11-20 09:17:09.043286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.319 qpair failed and we were unable to recover it. 00:26:32.319 [2024-11-20 09:17:09.043403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.319 [2024-11-20 09:17:09.043429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.319 qpair failed and we were unable to recover it. 
00:26:32.319 [2024-11-20 09:17:09.043548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.319 [2024-11-20 09:17:09.043576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.319 qpair failed and we were unable to recover it. 00:26:32.319 [2024-11-20 09:17:09.043720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.319 [2024-11-20 09:17:09.043747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.319 qpair failed and we were unable to recover it. 00:26:32.319 [2024-11-20 09:17:09.043887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.319 [2024-11-20 09:17:09.043914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.319 qpair failed and we were unable to recover it. 00:26:32.319 [2024-11-20 09:17:09.044024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.319 [2024-11-20 09:17:09.044050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.319 qpair failed and we were unable to recover it. 00:26:32.319 [2024-11-20 09:17:09.044129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.319 [2024-11-20 09:17:09.044155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.319 qpair failed and we were unable to recover it. 
00:26:32.319 [2024-11-20 09:17:09.044293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.319 [2024-11-20 09:17:09.044329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.319 qpair failed and we were unable to recover it. 00:26:32.319 [2024-11-20 09:17:09.044471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.319 [2024-11-20 09:17:09.044497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.319 qpair failed and we were unable to recover it. 00:26:32.319 [2024-11-20 09:17:09.044582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.319 [2024-11-20 09:17:09.044608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.319 qpair failed and we were unable to recover it. 00:26:32.319 [2024-11-20 09:17:09.044742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.319 [2024-11-20 09:17:09.044768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.319 qpair failed and we were unable to recover it. 00:26:32.319 [2024-11-20 09:17:09.044852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.319 [2024-11-20 09:17:09.044878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.319 qpair failed and we were unable to recover it. 
00:26:32.319 [2024-11-20 09:17:09.044978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.319 [2024-11-20 09:17:09.045003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.319 qpair failed and we were unable to recover it. 00:26:32.319 [2024-11-20 09:17:09.045084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.319 [2024-11-20 09:17:09.045110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.319 qpair failed and we were unable to recover it. 00:26:32.319 [2024-11-20 09:17:09.045222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.319 [2024-11-20 09:17:09.045248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.319 qpair failed and we were unable to recover it. 00:26:32.319 [2024-11-20 09:17:09.045375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.319 [2024-11-20 09:17:09.045401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.319 qpair failed and we were unable to recover it. 00:26:32.319 [2024-11-20 09:17:09.045487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.319 [2024-11-20 09:17:09.045512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.319 qpair failed and we were unable to recover it. 
00:26:32.319 [2024-11-20 09:17:09.045595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.319 [2024-11-20 09:17:09.045621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.319 qpair failed and we were unable to recover it. 00:26:32.319 [2024-11-20 09:17:09.045697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.319 [2024-11-20 09:17:09.045723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.319 qpair failed and we were unable to recover it. 00:26:32.319 [2024-11-20 09:17:09.045834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.319 [2024-11-20 09:17:09.045860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.319 qpair failed and we were unable to recover it. 00:26:32.319 [2024-11-20 09:17:09.045944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.319 [2024-11-20 09:17:09.045973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.319 qpair failed and we were unable to recover it. 00:26:32.319 [2024-11-20 09:17:09.046058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.319 [2024-11-20 09:17:09.046085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.319 qpair failed and we were unable to recover it. 
00:26:32.319 [2024-11-20 09:17:09.046189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.319 [2024-11-20 09:17:09.046216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.319 qpair failed and we were unable to recover it. 00:26:32.319 [2024-11-20 09:17:09.046350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-20 09:17:09.046377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-20 09:17:09.046465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-20 09:17:09.046491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-20 09:17:09.046599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-20 09:17:09.046625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-20 09:17:09.046713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-20 09:17:09.046740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 
00:26:32.320 [2024-11-20 09:17:09.046820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-20 09:17:09.046846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-20 09:17:09.046950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-20 09:17:09.046995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-20 09:17:09.047093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-20 09:17:09.047121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-20 09:17:09.047235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-20 09:17:09.047261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-20 09:17:09.047407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-20 09:17:09.047434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 
00:26:32.320 [2024-11-20 09:17:09.047523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-20 09:17:09.047548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-20 09:17:09.047636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-20 09:17:09.047661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-20 09:17:09.047743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-20 09:17:09.047770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-20 09:17:09.047887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-20 09:17:09.047913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-20 09:17:09.048008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-20 09:17:09.048035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 
00:26:32.320 [2024-11-20 09:17:09.048171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-20 09:17:09.048197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-20 09:17:09.048311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-20 09:17:09.048338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-20 09:17:09.048424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-20 09:17:09.048450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-20 09:17:09.048537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-20 09:17:09.048564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-20 09:17:09.048693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-20 09:17:09.048719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 
00:26:32.320 [2024-11-20 09:17:09.048809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-20 09:17:09.048835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-20 09:17:09.048924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-20 09:17:09.048951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-20 09:17:09.049072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-20 09:17:09.049097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-20 09:17:09.049188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-20 09:17:09.049226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-20 09:17:09.049325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-20 09:17:09.049352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 
00:26:32.320 [2024-11-20 09:17:09.049437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-20 09:17:09.049463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-20 09:17:09.049558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-20 09:17:09.049585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-20 09:17:09.049694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-20 09:17:09.049720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-20 09:17:09.049860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-20 09:17:09.049886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-20 09:17:09.050000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-20 09:17:09.050027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 
00:26:32.320 [2024-11-20 09:17:09.050127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-20 09:17:09.050165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-20 09:17:09.050315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-20 09:17:09.050355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-20 09:17:09.050457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-20 09:17:09.050483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-20 09:17:09.050611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.320 [2024-11-20 09:17:09.050637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.320 qpair failed and we were unable to recover it. 00:26:32.320 [2024-11-20 09:17:09.050761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-20 09:17:09.050808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 
00:26:32.321 [2024-11-20 09:17:09.050921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-20 09:17:09.050969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-20 09:17:09.051087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-20 09:17:09.051114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-20 09:17:09.051241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-20 09:17:09.051270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-20 09:17:09.051365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-20 09:17:09.051393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-20 09:17:09.051489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-20 09:17:09.051514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 
00:26:32.321 [2024-11-20 09:17:09.051606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-20 09:17:09.051632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-20 09:17:09.051765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-20 09:17:09.051791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-20 09:17:09.051878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-20 09:17:09.051904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-20 09:17:09.051994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-20 09:17:09.052021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-20 09:17:09.052098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-20 09:17:09.052124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 
00:26:32.321 [2024-11-20 09:17:09.052237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-20 09:17:09.052263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-20 09:17:09.052368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-20 09:17:09.052399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-20 09:17:09.052508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-20 09:17:09.052534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-20 09:17:09.052652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-20 09:17:09.052679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-20 09:17:09.052795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-20 09:17:09.052822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 
00:26:32.321 [2024-11-20 09:17:09.052896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-20 09:17:09.052922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-20 09:17:09.053003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-20 09:17:09.053029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-20 09:17:09.053143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-20 09:17:09.053171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-20 09:17:09.053248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-20 09:17:09.053274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-20 09:17:09.053399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-20 09:17:09.053428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 
00:26:32.321 [2024-11-20 09:17:09.053554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-20 09:17:09.053580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-20 09:17:09.053669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-20 09:17:09.053697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-20 09:17:09.053813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-20 09:17:09.053839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-20 09:17:09.053957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-20 09:17:09.053984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-20 09:17:09.054124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-20 09:17:09.054150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 
00:26:32.321 [2024-11-20 09:17:09.054283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-20 09:17:09.054328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-20 09:17:09.054449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-20 09:17:09.054476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-20 09:17:09.054608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-20 09:17:09.054647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-20 09:17:09.054772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-20 09:17:09.054802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-20 09:17:09.054893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-20 09:17:09.054919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 
00:26:32.321 [2024-11-20 09:17:09.055014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.321 [2024-11-20 09:17:09.055039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.321 qpair failed and we were unable to recover it. 00:26:32.321 [2024-11-20 09:17:09.055177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-20 09:17:09.055202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-20 09:17:09.055318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-20 09:17:09.055344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-20 09:17:09.055448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-20 09:17:09.055473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-20 09:17:09.055557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-20 09:17:09.055582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 
00:26:32.322 [2024-11-20 09:17:09.055702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-20 09:17:09.055730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-20 09:17:09.055816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-20 09:17:09.055843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-20 09:17:09.055959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-20 09:17:09.055986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-20 09:17:09.056072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-20 09:17:09.056106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-20 09:17:09.056211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-20 09:17:09.056250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 
00:26:32.322 [2024-11-20 09:17:09.056373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-20 09:17:09.056412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-20 09:17:09.056500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-20 09:17:09.056528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-20 09:17:09.056613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-20 09:17:09.056641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-20 09:17:09.056812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-20 09:17:09.056859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-20 09:17:09.057003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-20 09:17:09.057039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 
00:26:32.322 [2024-11-20 09:17:09.057139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-20 09:17:09.057166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-20 09:17:09.057271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-20 09:17:09.057318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-20 09:17:09.057410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-20 09:17:09.057438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-20 09:17:09.057527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-20 09:17:09.057553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-20 09:17:09.057639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-20 09:17:09.057665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 
00:26:32.322 [2024-11-20 09:17:09.057783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-20 09:17:09.057808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-20 09:17:09.057954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-20 09:17:09.057979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-20 09:17:09.058092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-20 09:17:09.058118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-20 09:17:09.058228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-20 09:17:09.058254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-20 09:17:09.058361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-20 09:17:09.058401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 
00:26:32.322 [2024-11-20 09:17:09.058498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-20 09:17:09.058527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-20 09:17:09.058610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-20 09:17:09.058637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-20 09:17:09.058742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-20 09:17:09.058769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-20 09:17:09.058860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-20 09:17:09.058887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-20 09:17:09.059041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-20 09:17:09.059080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 
00:26:32.322 [2024-11-20 09:17:09.059175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-20 09:17:09.059201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-20 09:17:09.059315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-20 09:17:09.059341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-20 09:17:09.059429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.322 [2024-11-20 09:17:09.059454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.322 qpair failed and we were unable to recover it. 00:26:32.322 [2024-11-20 09:17:09.059542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-20 09:17:09.059568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-20 09:17:09.059675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-20 09:17:09.059701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 
00:26:32.323 [2024-11-20 09:17:09.059784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-20 09:17:09.059812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-20 09:17:09.059904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-20 09:17:09.059930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-20 09:17:09.060045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-20 09:17:09.060072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-20 09:17:09.060157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-20 09:17:09.060183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-20 09:17:09.060290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-20 09:17:09.060323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 
00:26:32.323 [2024-11-20 09:17:09.060436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-20 09:17:09.060462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-20 09:17:09.060566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-20 09:17:09.060593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-20 09:17:09.060703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-20 09:17:09.060729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-20 09:17:09.060817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-20 09:17:09.060844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-20 09:17:09.060943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-20 09:17:09.060970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 
00:26:32.323 [2024-11-20 09:17:09.061053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-20 09:17:09.061078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-20 09:17:09.061158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-20 09:17:09.061184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-20 09:17:09.061261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-20 09:17:09.061286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-20 09:17:09.061467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-20 09:17:09.061494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-20 09:17:09.061615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-20 09:17:09.061641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 
00:26:32.323 [2024-11-20 09:17:09.061726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-20 09:17:09.061753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-20 09:17:09.061855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-20 09:17:09.061893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-20 09:17:09.062037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-20 09:17:09.062065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-20 09:17:09.062204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-20 09:17:09.062230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-20 09:17:09.062336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-20 09:17:09.062363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 
00:26:32.323 [2024-11-20 09:17:09.062457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-20 09:17:09.062483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-20 09:17:09.062603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-20 09:17:09.062630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-20 09:17:09.062726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-20 09:17:09.062752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-20 09:17:09.062833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-20 09:17:09.062859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-20 09:17:09.062948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-20 09:17:09.062974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 
00:26:32.323 [2024-11-20 09:17:09.063084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-20 09:17:09.063110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-20 09:17:09.063199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-20 09:17:09.063224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-20 09:17:09.063376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-20 09:17:09.063403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-20 09:17:09.063487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-20 09:17:09.063513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 00:26:32.323 [2024-11-20 09:17:09.063604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.323 [2024-11-20 09:17:09.063631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.323 qpair failed and we were unable to recover it. 
00:26:32.323 [2024-11-20 09:17:09.063743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.323 [2024-11-20 09:17:09.063769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.323 qpair failed and we were unable to recover it.
00:26:32.323 [2024-11-20 09:17:09.063887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.323 [2024-11-20 09:17:09.063916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.323 qpair failed and we were unable to recover it.
00:26:32.323 [2024-11-20 09:17:09.064014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.323 [2024-11-20 09:17:09.064052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.064187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.064226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.064313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.064341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.064435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.064463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.064558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.064584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.064671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.064698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.064816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.064841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.064926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.064952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.065035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.065065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.065149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.065176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.065291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.065322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.065410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.065435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.065514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.065539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.065650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.065676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.065785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.065811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.065918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.065943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.066058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.066089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.066173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.066202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.066322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.066349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.066433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.066461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.066566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.066593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.066709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.066735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.066870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.066896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.066983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.067009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.067087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.067113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.067225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.067251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.067386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.067425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.067520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.067547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.067641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.067667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.067817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.067843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.067959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.067985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.068135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.068174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.068268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.068295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.068443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.068470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.068562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.068589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.068723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.068775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.068946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.068994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.069108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.069134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.069257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.069288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.069397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.069435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.069534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.324 [2024-11-20 09:17:09.069561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.324 qpair failed and we were unable to recover it.
00:26:32.324 [2024-11-20 09:17:09.069670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.069696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.069783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.069809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.069897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.069926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.070050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.070090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.070224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.070263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.070383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.070423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.070519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.070547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.070710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.070736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.070916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.070971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.071132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.071203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.071323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.071350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.071438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.071466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.071611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.071638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.071765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.071816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.071965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.072010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.072101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.072128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.072239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.072266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.072386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.072426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.072574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.072602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.072721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.072747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.072827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.072853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.073000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.073026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.073131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.073157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.073247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.073274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.073393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.073432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.073560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.073600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.073767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.073822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.073971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.074019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.074109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.074136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.074220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.074247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.074334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.074361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.074457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.074483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.074569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.074595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.074701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.074727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.074813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.074845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.074963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.074989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.075129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.075155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.075278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.075324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.075420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.075447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.075537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.075563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.075676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.325 [2024-11-20 09:17:09.075723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.325 qpair failed and we were unable to recover it.
00:26:32.325 [2024-11-20 09:17:09.075861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.326 [2024-11-20 09:17:09.075908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.326 qpair failed and we were unable to recover it.
00:26:32.326 [2024-11-20 09:17:09.075983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.326 [2024-11-20 09:17:09.076008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.326 qpair failed and we were unable to recover it.
00:26:32.326 [2024-11-20 09:17:09.076118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.326 [2024-11-20 09:17:09.076143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.326 qpair failed and we were unable to recover it.
00:26:32.326 [2024-11-20 09:17:09.076260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.326 [2024-11-20 09:17:09.076286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.326 qpair failed and we were unable to recover it.
00:26:32.326 [2024-11-20 09:17:09.076424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.326 [2024-11-20 09:17:09.076463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.326 qpair failed and we were unable to recover it.
00:26:32.326 [2024-11-20 09:17:09.076585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.326 [2024-11-20 09:17:09.076613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.326 qpair failed and we were unable to recover it.
00:26:32.326 [2024-11-20 09:17:09.076729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.326 [2024-11-20 09:17:09.076756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.326 qpair failed and we were unable to recover it.
00:26:32.326 [2024-11-20 09:17:09.076849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.326 [2024-11-20 09:17:09.076877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.326 qpair failed and we were unable to recover it.
00:26:32.326 [2024-11-20 09:17:09.076968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.326 [2024-11-20 09:17:09.076997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.326 qpair failed and we were unable to recover it.
00:26:32.326 [2024-11-20 09:17:09.077085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.326 [2024-11-20 09:17:09.077112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.326 qpair failed and we were unable to recover it.
00:26:32.326 [2024-11-20 09:17:09.077200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.326 [2024-11-20 09:17:09.077226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.326 qpair failed and we were unable to recover it.
00:26:32.326 [2024-11-20 09:17:09.077314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-20 09:17:09.077341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-20 09:17:09.077431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-20 09:17:09.077457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-20 09:17:09.077537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-20 09:17:09.077562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-20 09:17:09.077646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-20 09:17:09.077674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-20 09:17:09.077793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-20 09:17:09.077820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 
00:26:32.326 [2024-11-20 09:17:09.077912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-20 09:17:09.077940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-20 09:17:09.078054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-20 09:17:09.078080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-20 09:17:09.078182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-20 09:17:09.078220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-20 09:17:09.078344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-20 09:17:09.078372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-20 09:17:09.078457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-20 09:17:09.078491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 
00:26:32.326 [2024-11-20 09:17:09.078581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-20 09:17:09.078607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-20 09:17:09.078745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-20 09:17:09.078772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-20 09:17:09.078902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-20 09:17:09.078950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-20 09:17:09.079059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-20 09:17:09.079084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-20 09:17:09.079198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-20 09:17:09.079224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 
00:26:32.326 [2024-11-20 09:17:09.079339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-20 09:17:09.079367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-20 09:17:09.079455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-20 09:17:09.079481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-20 09:17:09.079563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-20 09:17:09.079588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-20 09:17:09.079674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-20 09:17:09.079700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-20 09:17:09.079809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-20 09:17:09.079835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 
00:26:32.326 [2024-11-20 09:17:09.079927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-20 09:17:09.079953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-20 09:17:09.080031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-20 09:17:09.080056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-20 09:17:09.080141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-20 09:17:09.080166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-20 09:17:09.080322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-20 09:17:09.080349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-20 09:17:09.080428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-20 09:17:09.080453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 
00:26:32.326 [2024-11-20 09:17:09.080564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-20 09:17:09.080591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-20 09:17:09.080701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-20 09:17:09.080727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-20 09:17:09.080836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-20 09:17:09.080861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-20 09:17:09.080938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-20 09:17:09.080964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 00:26:32.326 [2024-11-20 09:17:09.081092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.326 [2024-11-20 09:17:09.081132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.326 qpair failed and we were unable to recover it. 
00:26:32.326 [2024-11-20 09:17:09.081218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.081247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-20 09:17:09.081368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.081395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-20 09:17:09.081483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.081509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-20 09:17:09.081597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.081624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-20 09:17:09.081763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.081811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 
00:26:32.327 [2024-11-20 09:17:09.081933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.081981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-20 09:17:09.082073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.082103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-20 09:17:09.082225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.082252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-20 09:17:09.082368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.082396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-20 09:17:09.082482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.082508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 
00:26:32.327 [2024-11-20 09:17:09.082618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.082646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-20 09:17:09.082764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.082790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-20 09:17:09.082882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.082909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-20 09:17:09.083009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.083048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-20 09:17:09.083157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.083184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 
00:26:32.327 [2024-11-20 09:17:09.083292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.083326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-20 09:17:09.083443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.083468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-20 09:17:09.083602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.083628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-20 09:17:09.083738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.083764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-20 09:17:09.083844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.083870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 
00:26:32.327 [2024-11-20 09:17:09.083971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.084000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-20 09:17:09.084087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.084115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-20 09:17:09.084236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.084262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-20 09:17:09.084381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.084408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-20 09:17:09.084490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.084519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 
00:26:32.327 [2024-11-20 09:17:09.084627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.084663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-20 09:17:09.084782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.084809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-20 09:17:09.084891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.084917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-20 09:17:09.085022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.085047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-20 09:17:09.085138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.085165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 
00:26:32.327 [2024-11-20 09:17:09.085255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.085283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-20 09:17:09.085406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.085434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-20 09:17:09.085547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.085574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-20 09:17:09.085666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.085693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 00:26:32.327 [2024-11-20 09:17:09.085836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.085883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.327 qpair failed and we were unable to recover it. 
00:26:32.327 [2024-11-20 09:17:09.086015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.327 [2024-11-20 09:17:09.086064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-20 09:17:09.086153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-20 09:17:09.086180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-20 09:17:09.086268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-20 09:17:09.086296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-20 09:17:09.086408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-20 09:17:09.086435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-20 09:17:09.086527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-20 09:17:09.086556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 
00:26:32.328 [2024-11-20 09:17:09.086665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-20 09:17:09.086691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-20 09:17:09.086787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-20 09:17:09.086813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-20 09:17:09.086897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-20 09:17:09.086924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-20 09:17:09.087036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-20 09:17:09.087063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-20 09:17:09.087154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-20 09:17:09.087180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 
00:26:32.328 [2024-11-20 09:17:09.087270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-20 09:17:09.087297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-20 09:17:09.087418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-20 09:17:09.087449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-20 09:17:09.087534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-20 09:17:09.087560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-20 09:17:09.087679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-20 09:17:09.087706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-20 09:17:09.087845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-20 09:17:09.087871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 
00:26:32.328 [2024-11-20 09:17:09.087979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-20 09:17:09.088006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-20 09:17:09.088122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-20 09:17:09.088149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-20 09:17:09.088248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-20 09:17:09.088288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-20 09:17:09.088437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-20 09:17:09.088464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-20 09:17:09.088575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-20 09:17:09.088601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 
00:26:32.328 [2024-11-20 09:17:09.088713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-20 09:17:09.088738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-20 09:17:09.088844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-20 09:17:09.088870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-20 09:17:09.088958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-20 09:17:09.088984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-20 09:17:09.089075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-20 09:17:09.089103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 00:26:32.328 [2024-11-20 09:17:09.089187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.328 [2024-11-20 09:17:09.089215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.328 qpair failed and we were unable to recover it. 
00:26:32.329 [2024-11-20 09:17:09.094022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.329 [2024-11-20 09:17:09.094048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.329 qpair failed and we were unable to recover it.
00:26:32.329 [2024-11-20 09:17:09.094148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.329 [2024-11-20 09:17:09.094187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.329 qpair failed and we were unable to recover it.
00:26:32.329 [2024-11-20 09:17:09.094324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.329 [2024-11-20 09:17:09.094352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.329 qpair failed and we were unable to recover it.
00:26:32.329 [2024-11-20 09:17:09.094471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.329 [2024-11-20 09:17:09.094498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.329 qpair failed and we were unable to recover it.
00:26:32.329 [2024-11-20 09:17:09.094608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.329 [2024-11-20 09:17:09.094634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.329 qpair failed and we were unable to recover it.
00:26:32.331 [2024-11-20 09:17:09.104072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-20 09:17:09.104098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-20 09:17:09.104185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-20 09:17:09.104211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-20 09:17:09.104286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-20 09:17:09.104319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-20 09:17:09.104401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-20 09:17:09.104428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-20 09:17:09.104510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-20 09:17:09.104537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 
00:26:32.331 [2024-11-20 09:17:09.104655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-20 09:17:09.104681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-20 09:17:09.104775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-20 09:17:09.104801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-20 09:17:09.104954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-20 09:17:09.104992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-20 09:17:09.105113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-20 09:17:09.105141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-20 09:17:09.105278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-20 09:17:09.105312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 
00:26:32.331 [2024-11-20 09:17:09.105425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-20 09:17:09.105452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-20 09:17:09.105573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-20 09:17:09.105601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-20 09:17:09.105708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-20 09:17:09.105734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-20 09:17:09.105850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-20 09:17:09.105899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-20 09:17:09.106041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-20 09:17:09.106089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 
00:26:32.331 [2024-11-20 09:17:09.106203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-20 09:17:09.106231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-20 09:17:09.106350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-20 09:17:09.106377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-20 09:17:09.106514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-20 09:17:09.106540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-20 09:17:09.106632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-20 09:17:09.106659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-20 09:17:09.106777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-20 09:17:09.106803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 
00:26:32.331 [2024-11-20 09:17:09.106946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-20 09:17:09.106972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-20 09:17:09.107088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-20 09:17:09.107114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-20 09:17:09.107190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-20 09:17:09.107216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-20 09:17:09.107328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-20 09:17:09.107353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.331 qpair failed and we were unable to recover it. 00:26:32.331 [2024-11-20 09:17:09.107446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.331 [2024-11-20 09:17:09.107472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 
00:26:32.332 [2024-11-20 09:17:09.107558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.107583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-20 09:17:09.107691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.107719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-20 09:17:09.107799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.107825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-20 09:17:09.107932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.107958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-20 09:17:09.108049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.108076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 
00:26:32.332 [2024-11-20 09:17:09.108161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.108188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-20 09:17:09.108267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.108294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-20 09:17:09.108444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.108471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-20 09:17:09.108561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.108587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-20 09:17:09.108692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.108718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 
00:26:32.332 [2024-11-20 09:17:09.108807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.108834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-20 09:17:09.108947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.108973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-20 09:17:09.109089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.109116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-20 09:17:09.109222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.109249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-20 09:17:09.109390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.109416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 
00:26:32.332 [2024-11-20 09:17:09.109502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.109528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-20 09:17:09.109614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.109641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-20 09:17:09.109812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.109861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-20 09:17:09.109973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.109999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-20 09:17:09.110116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.110142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 
00:26:32.332 [2024-11-20 09:17:09.110261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.110287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-20 09:17:09.110418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.110444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-20 09:17:09.110556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.110582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-20 09:17:09.110714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.110750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-20 09:17:09.110937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.110989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 
00:26:32.332 [2024-11-20 09:17:09.111105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.111138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-20 09:17:09.111280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.111313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-20 09:17:09.111406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.111432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-20 09:17:09.111548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.111574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-20 09:17:09.111713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.111740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 
00:26:32.332 [2024-11-20 09:17:09.111823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.111849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-20 09:17:09.111994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.112044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-20 09:17:09.112133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.112158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-20 09:17:09.112242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.112268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-20 09:17:09.112388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.112414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 
00:26:32.332 [2024-11-20 09:17:09.112499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.112525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-20 09:17:09.112639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.112665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-20 09:17:09.112755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.112780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-20 09:17:09.112891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.112916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 00:26:32.332 [2024-11-20 09:17:09.113062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.332 [2024-11-20 09:17:09.113088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.332 qpair failed and we were unable to recover it. 
00:26:32.332 [2024-11-20 09:17:09.113206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.333 [2024-11-20 09:17:09.113232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.333 qpair failed and we were unable to recover it. 00:26:32.333 [2024-11-20 09:17:09.113353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.333 [2024-11-20 09:17:09.113380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.333 qpair failed and we were unable to recover it. 00:26:32.333 [2024-11-20 09:17:09.113488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.333 [2024-11-20 09:17:09.113514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.333 qpair failed and we were unable to recover it. 00:26:32.333 [2024-11-20 09:17:09.113610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.333 [2024-11-20 09:17:09.113638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.333 qpair failed and we were unable to recover it. 00:26:32.333 [2024-11-20 09:17:09.113757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.333 [2024-11-20 09:17:09.113782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.333 qpair failed and we were unable to recover it. 
00:26:32.333 [2024-11-20 09:17:09.113871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.333 [2024-11-20 09:17:09.113897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.333 qpair failed and we were unable to recover it. 00:26:32.333 [2024-11-20 09:17:09.114009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.333 [2024-11-20 09:17:09.114035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.333 qpair failed and we were unable to recover it. 00:26:32.333 [2024-11-20 09:17:09.114114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.333 [2024-11-20 09:17:09.114141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.333 qpair failed and we were unable to recover it. 00:26:32.333 [2024-11-20 09:17:09.114261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.333 [2024-11-20 09:17:09.114300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.333 qpair failed and we were unable to recover it. 00:26:32.333 [2024-11-20 09:17:09.114413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.333 [2024-11-20 09:17:09.114442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.333 qpair failed and we were unable to recover it. 
00:26:32.333 [2024-11-20 09:17:09.114553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.333 [2024-11-20 09:17:09.114579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.333 qpair failed and we were unable to recover it. 
00:26:32.336 [last two messages repeated through 09:17:09.129829 for tqpair=0x7fe96c000b90, 0x18fffa0, 0x7fe968000b90, 0x7fe974000b90 with addr=10.0.0.2, port=4420]
00:26:32.336 [2024-11-20 09:17:09.129946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.336 [2024-11-20 09:17:09.129973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.336 qpair failed and we were unable to recover it. 00:26:32.336 [2024-11-20 09:17:09.130080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.336 [2024-11-20 09:17:09.130106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.336 qpair failed and we were unable to recover it. 00:26:32.336 [2024-11-20 09:17:09.130261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.336 [2024-11-20 09:17:09.130315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.336 qpair failed and we were unable to recover it. 00:26:32.336 [2024-11-20 09:17:09.130446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.336 [2024-11-20 09:17:09.130475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.336 qpair failed and we were unable to recover it. 00:26:32.336 [2024-11-20 09:17:09.130600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.336 [2024-11-20 09:17:09.130627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.336 qpair failed and we were unable to recover it. 
00:26:32.336 [2024-11-20 09:17:09.130764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.336 [2024-11-20 09:17:09.130790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.336 qpair failed and we were unable to recover it. 00:26:32.336 [2024-11-20 09:17:09.130899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.336 [2024-11-20 09:17:09.130925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.336 qpair failed and we were unable to recover it. 00:26:32.336 [2024-11-20 09:17:09.131012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.336 [2024-11-20 09:17:09.131039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.336 qpair failed and we were unable to recover it. 00:26:32.336 [2024-11-20 09:17:09.131140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.336 [2024-11-20 09:17:09.131179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.336 qpair failed and we were unable to recover it. 00:26:32.336 [2024-11-20 09:17:09.131275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.336 [2024-11-20 09:17:09.131309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.336 qpair failed and we were unable to recover it. 
00:26:32.336 [2024-11-20 09:17:09.131404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.336 [2024-11-20 09:17:09.131430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.336 qpair failed and we were unable to recover it. 00:26:32.336 [2024-11-20 09:17:09.131521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.336 [2024-11-20 09:17:09.131547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.336 qpair failed and we were unable to recover it. 00:26:32.336 [2024-11-20 09:17:09.131665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.336 [2024-11-20 09:17:09.131692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.336 qpair failed and we were unable to recover it. 00:26:32.336 [2024-11-20 09:17:09.131800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.336 [2024-11-20 09:17:09.131826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.336 qpair failed and we were unable to recover it. 00:26:32.336 [2024-11-20 09:17:09.131921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.336 [2024-11-20 09:17:09.131948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.336 qpair failed and we were unable to recover it. 
00:26:32.336 [2024-11-20 09:17:09.132062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.336 [2024-11-20 09:17:09.132088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.336 qpair failed and we were unable to recover it. 00:26:32.336 [2024-11-20 09:17:09.132216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.336 [2024-11-20 09:17:09.132255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.336 qpair failed and we were unable to recover it. 00:26:32.336 [2024-11-20 09:17:09.132384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.336 [2024-11-20 09:17:09.132412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.336 qpair failed and we were unable to recover it. 00:26:32.336 [2024-11-20 09:17:09.132504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.336 [2024-11-20 09:17:09.132530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.336 qpair failed and we were unable to recover it. 00:26:32.336 [2024-11-20 09:17:09.132649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.336 [2024-11-20 09:17:09.132675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.336 qpair failed and we were unable to recover it. 
00:26:32.336 [2024-11-20 09:17:09.132787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.336 [2024-11-20 09:17:09.132812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.336 qpair failed and we were unable to recover it. 00:26:32.336 [2024-11-20 09:17:09.132902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.336 [2024-11-20 09:17:09.132927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.336 qpair failed and we were unable to recover it. 00:26:32.336 [2024-11-20 09:17:09.133041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.336 [2024-11-20 09:17:09.133072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.336 qpair failed and we were unable to recover it. 00:26:32.336 [2024-11-20 09:17:09.133147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.336 [2024-11-20 09:17:09.133172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.336 qpair failed and we were unable to recover it. 00:26:32.336 [2024-11-20 09:17:09.133255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.336 [2024-11-20 09:17:09.133281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.336 qpair failed and we were unable to recover it. 
00:26:32.336 [2024-11-20 09:17:09.133399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.336 [2024-11-20 09:17:09.133427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.336 qpair failed and we were unable to recover it. 00:26:32.336 [2024-11-20 09:17:09.133514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.336 [2024-11-20 09:17:09.133541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.336 qpair failed and we were unable to recover it. 00:26:32.336 [2024-11-20 09:17:09.133629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.336 [2024-11-20 09:17:09.133655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.336 qpair failed and we were unable to recover it. 00:26:32.336 [2024-11-20 09:17:09.133772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.336 [2024-11-20 09:17:09.133799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.336 qpair failed and we were unable to recover it. 00:26:32.336 [2024-11-20 09:17:09.133908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.336 [2024-11-20 09:17:09.133934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.336 qpair failed and we were unable to recover it. 
00:26:32.336 [2024-11-20 09:17:09.134016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.336 [2024-11-20 09:17:09.134043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.336 qpair failed and we were unable to recover it. 00:26:32.336 [2024-11-20 09:17:09.134135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.336 [2024-11-20 09:17:09.134163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.336 qpair failed and we were unable to recover it. 00:26:32.336 [2024-11-20 09:17:09.134245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.336 [2024-11-20 09:17:09.134273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.336 qpair failed and we were unable to recover it. 00:26:32.336 [2024-11-20 09:17:09.134369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.336 [2024-11-20 09:17:09.134395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.336 qpair failed and we were unable to recover it. 00:26:32.336 [2024-11-20 09:17:09.134483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.336 [2024-11-20 09:17:09.134509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.336 qpair failed and we were unable to recover it. 
00:26:32.337 [2024-11-20 09:17:09.134622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.134648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 00:26:32.337 [2024-11-20 09:17:09.134731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.134757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 00:26:32.337 [2024-11-20 09:17:09.134865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.134891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 00:26:32.337 [2024-11-20 09:17:09.134975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.135001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 00:26:32.337 [2024-11-20 09:17:09.135086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.135126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 
00:26:32.337 [2024-11-20 09:17:09.135221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.135249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 00:26:32.337 [2024-11-20 09:17:09.135346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.135373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 00:26:32.337 [2024-11-20 09:17:09.135458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.135485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 00:26:32.337 [2024-11-20 09:17:09.135567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.135592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 00:26:32.337 [2024-11-20 09:17:09.135673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.135699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 
00:26:32.337 [2024-11-20 09:17:09.135812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.135839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 00:26:32.337 [2024-11-20 09:17:09.135929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.135958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 00:26:32.337 [2024-11-20 09:17:09.136090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.136137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 00:26:32.337 [2024-11-20 09:17:09.136271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.136317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 00:26:32.337 [2024-11-20 09:17:09.136445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.136472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 
00:26:32.337 [2024-11-20 09:17:09.136554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.136579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 00:26:32.337 [2024-11-20 09:17:09.136691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.136717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 00:26:32.337 [2024-11-20 09:17:09.136799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.136824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 00:26:32.337 [2024-11-20 09:17:09.136933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.136959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 00:26:32.337 [2024-11-20 09:17:09.137067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.137094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 
00:26:32.337 [2024-11-20 09:17:09.137186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.137214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 00:26:32.337 [2024-11-20 09:17:09.137312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.137341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 00:26:32.337 [2024-11-20 09:17:09.137430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.137456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 00:26:32.337 [2024-11-20 09:17:09.137538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.137564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 00:26:32.337 [2024-11-20 09:17:09.137646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.137673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 
00:26:32.337 [2024-11-20 09:17:09.137750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.137776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 00:26:32.337 [2024-11-20 09:17:09.137913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.137939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 00:26:32.337 [2024-11-20 09:17:09.138049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.138080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 00:26:32.337 [2024-11-20 09:17:09.138170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.138199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 00:26:32.337 [2024-11-20 09:17:09.138279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.138314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 
00:26:32.337 [2024-11-20 09:17:09.138400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.138426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 00:26:32.337 [2024-11-20 09:17:09.138543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.138569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 00:26:32.337 [2024-11-20 09:17:09.138657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.138682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 00:26:32.337 [2024-11-20 09:17:09.138792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.138817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 00:26:32.337 [2024-11-20 09:17:09.138893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.138921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 
00:26:32.337 [2024-11-20 09:17:09.139013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.139040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 00:26:32.337 [2024-11-20 09:17:09.139162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.139189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 00:26:32.337 [2024-11-20 09:17:09.139273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.139300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 00:26:32.337 [2024-11-20 09:17:09.139431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.139458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 00:26:32.337 [2024-11-20 09:17:09.139572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.139598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it. 
00:26:32.337 [2024-11-20 09:17:09.139718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.337 [2024-11-20 09:17:09.139744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.337 qpair failed and we were unable to recover it.
00:26:32.340 (the preceding connect()/qpair-failure sequence repeats continuously from 09:17:09.139842 through 09:17:09.154572, cycling over tqpairs 0x7fe96c000b90, 0x7fe974000b90, 0x7fe968000b90, and 0x18fffa0, all to addr=10.0.0.2, port=4420; errno 111 is ECONNREFUSED in every instance)
00:26:32.340 [2024-11-20 09:17:09.154676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.340 [2024-11-20 09:17:09.154702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.340 qpair failed and we were unable to recover it. 00:26:32.340 [2024-11-20 09:17:09.154792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.340 [2024-11-20 09:17:09.154818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.340 qpair failed and we were unable to recover it. 00:26:32.340 [2024-11-20 09:17:09.154900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.340 [2024-11-20 09:17:09.154927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.340 qpair failed and we were unable to recover it. 00:26:32.340 [2024-11-20 09:17:09.155045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.155073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.155148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.155175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 
00:26:32.341 [2024-11-20 09:17:09.155263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.155292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.155393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.155427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.155555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.155594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.155686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.155714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.155834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.155860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 
00:26:32.341 [2024-11-20 09:17:09.155949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.155976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.156093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.156119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.156200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.156226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.156326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.156354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.156451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.156477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 
00:26:32.341 [2024-11-20 09:17:09.156563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.156590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.156683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.156709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.156829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.156854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.156939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.156965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.157042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.157068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 
00:26:32.341 [2024-11-20 09:17:09.157165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.157194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.157296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.157345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.157439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.157468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.157561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.157588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.157675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.157702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 
00:26:32.341 [2024-11-20 09:17:09.157795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.157824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.157910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.157937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.158019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.158045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.158128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.158153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.158265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.158291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 
00:26:32.341 [2024-11-20 09:17:09.158380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.158405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.158493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.158518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.158641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.158666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.158758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.158787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.158880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.158908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 
00:26:32.341 [2024-11-20 09:17:09.159031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.159059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.159145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.159171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.159258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.159284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.159388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.159414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.159500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.159526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 
00:26:32.341 [2024-11-20 09:17:09.159621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.159647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.159736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.159764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.159857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.159884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.159969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.159997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.160093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.160119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 
00:26:32.341 [2024-11-20 09:17:09.160211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.160237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.160331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.160364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.160480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.160508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.160601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.160627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 00:26:32.341 [2024-11-20 09:17:09.160712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.341 [2024-11-20 09:17:09.160738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.341 qpair failed and we were unable to recover it. 
00:26:32.341 [2024-11-20 09:17:09.160827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.342 [2024-11-20 09:17:09.160855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.342 qpair failed and we were unable to recover it. 00:26:32.342 [2024-11-20 09:17:09.160996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.342 [2024-11-20 09:17:09.161022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.342 qpair failed and we were unable to recover it. 00:26:32.342 [2024-11-20 09:17:09.161112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.342 [2024-11-20 09:17:09.161139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.342 qpair failed and we were unable to recover it. 00:26:32.342 [2024-11-20 09:17:09.161221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.342 [2024-11-20 09:17:09.161249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.342 qpair failed and we were unable to recover it. 00:26:32.342 [2024-11-20 09:17:09.161343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.342 [2024-11-20 09:17:09.161370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.342 qpair failed and we were unable to recover it. 
00:26:32.342 [2024-11-20 09:17:09.161479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.342 [2024-11-20 09:17:09.161518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.342 qpair failed and we were unable to recover it. 00:26:32.342 [2024-11-20 09:17:09.161662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.342 [2024-11-20 09:17:09.161709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.342 qpair failed and we were unable to recover it. 00:26:32.342 [2024-11-20 09:17:09.161811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.342 [2024-11-20 09:17:09.161861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.342 qpair failed and we were unable to recover it. 00:26:32.342 [2024-11-20 09:17:09.162002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.342 [2024-11-20 09:17:09.162049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.342 qpair failed and we were unable to recover it. 00:26:32.342 [2024-11-20 09:17:09.162134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.342 [2024-11-20 09:17:09.162160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.342 qpair failed and we were unable to recover it. 
00:26:32.342 [2024-11-20 09:17:09.162286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.342 [2024-11-20 09:17:09.162320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.342 qpair failed and we were unable to recover it. 00:26:32.342 [2024-11-20 09:17:09.162410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.342 [2024-11-20 09:17:09.162436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.342 qpair failed and we were unable to recover it. 00:26:32.342 [2024-11-20 09:17:09.162519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.342 [2024-11-20 09:17:09.162547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.342 qpair failed and we were unable to recover it. 00:26:32.342 [2024-11-20 09:17:09.162648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.342 [2024-11-20 09:17:09.162687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.342 qpair failed and we were unable to recover it. 00:26:32.342 [2024-11-20 09:17:09.162789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.342 [2024-11-20 09:17:09.162817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.342 qpair failed and we were unable to recover it. 
00:26:32.342 [2024-11-20 09:17:09.162900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.342 [2024-11-20 09:17:09.162926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.342 qpair failed and we were unable to recover it. 00:26:32.342 [2024-11-20 09:17:09.163005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.342 [2024-11-20 09:17:09.163031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.342 qpair failed and we were unable to recover it. 00:26:32.342 [2024-11-20 09:17:09.163146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.342 [2024-11-20 09:17:09.163172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.342 qpair failed and we were unable to recover it. 00:26:32.342 [2024-11-20 09:17:09.163268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.342 [2024-11-20 09:17:09.163294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.342 qpair failed and we were unable to recover it. 00:26:32.342 [2024-11-20 09:17:09.163394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.342 [2024-11-20 09:17:09.163420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.342 qpair failed and we were unable to recover it. 
00:26:32.342 [2024-11-20 09:17:09.163510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.342 [2024-11-20 09:17:09.163536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.342 qpair failed and we were unable to recover it. 00:26:32.342 [2024-11-20 09:17:09.163655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.342 [2024-11-20 09:17:09.163690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.342 qpair failed and we were unable to recover it. 00:26:32.342 [2024-11-20 09:17:09.163837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.342 [2024-11-20 09:17:09.163870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.342 qpair failed and we were unable to recover it. 00:26:32.342 [2024-11-20 09:17:09.164009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.342 [2024-11-20 09:17:09.164051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.342 qpair failed and we were unable to recover it. 00:26:32.342 [2024-11-20 09:17:09.164166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.342 [2024-11-20 09:17:09.164194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.342 qpair failed and we were unable to recover it. 
00:26:32.342 [2024-11-20 09:17:09.164275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.164301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.342 [2024-11-20 09:17:09.164397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.164424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.342 [2024-11-20 09:17:09.164510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.164536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.342 [2024-11-20 09:17:09.164652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.164699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.342 [2024-11-20 09:17:09.164863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.164910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.342 [2024-11-20 09:17:09.164994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.165019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.342 [2024-11-20 09:17:09.165117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.165156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.342 [2024-11-20 09:17:09.165262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.165301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.342 [2024-11-20 09:17:09.165435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.165462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.342 [2024-11-20 09:17:09.165581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.165619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.342 [2024-11-20 09:17:09.165760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.165804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.342 [2024-11-20 09:17:09.165940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.165986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.342 [2024-11-20 09:17:09.166083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.166110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.342 [2024-11-20 09:17:09.166203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.166228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.342 [2024-11-20 09:17:09.166323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.166350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.342 [2024-11-20 09:17:09.166438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.166465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.342 [2024-11-20 09:17:09.166567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.166597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.342 [2024-11-20 09:17:09.166682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.166709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.342 [2024-11-20 09:17:09.166821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.166848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.342 [2024-11-20 09:17:09.166934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.166960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.342 [2024-11-20 09:17:09.167064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.167104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.342 [2024-11-20 09:17:09.167205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.167233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.342 [2024-11-20 09:17:09.167337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.167364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.342 [2024-11-20 09:17:09.167451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.167477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.342 [2024-11-20 09:17:09.167556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.167583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.342 [2024-11-20 09:17:09.167719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.167750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.342 [2024-11-20 09:17:09.167865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.167913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.342 [2024-11-20 09:17:09.168002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.168033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.342 [2024-11-20 09:17:09.168129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.168156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.342 [2024-11-20 09:17:09.168243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.168269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.342 [2024-11-20 09:17:09.168365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.168393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.342 [2024-11-20 09:17:09.168478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.168505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.342 [2024-11-20 09:17:09.168606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.168632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.342 [2024-11-20 09:17:09.168715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.168742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.342 [2024-11-20 09:17:09.168860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.168886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.342 [2024-11-20 09:17:09.168982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.342 [2024-11-20 09:17:09.169008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.342 qpair failed and we were unable to recover it.
00:26:32.343 [2024-11-20 09:17:09.169109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.343 [2024-11-20 09:17:09.169135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.343 qpair failed and we were unable to recover it.
00:26:32.343 [2024-11-20 09:17:09.169224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.343 [2024-11-20 09:17:09.169253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.343 qpair failed and we were unable to recover it.
00:26:32.343 [2024-11-20 09:17:09.169378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.343 [2024-11-20 09:17:09.169418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.343 qpair failed and we were unable to recover it.
00:26:32.343 [2024-11-20 09:17:09.169549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.343 [2024-11-20 09:17:09.169577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.343 qpair failed and we were unable to recover it.
00:26:32.343 [2024-11-20 09:17:09.169680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.343 [2024-11-20 09:17:09.169707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.343 qpair failed and we were unable to recover it.
00:26:32.343 [2024-11-20 09:17:09.169826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.343 [2024-11-20 09:17:09.169853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.343 qpair failed and we were unable to recover it.
00:26:32.343 [2024-11-20 09:17:09.169943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.343 [2024-11-20 09:17:09.169970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.343 qpair failed and we were unable to recover it.
00:26:32.343 [2024-11-20 09:17:09.170048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.343 [2024-11-20 09:17:09.170074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.343 qpair failed and we were unable to recover it.
00:26:32.343 [2024-11-20 09:17:09.170195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.343 [2024-11-20 09:17:09.170235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.343 qpair failed and we were unable to recover it.
00:26:32.343 [2024-11-20 09:17:09.170343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.343 [2024-11-20 09:17:09.170382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.343 qpair failed and we were unable to recover it.
00:26:32.343 [2024-11-20 09:17:09.170509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.343 [2024-11-20 09:17:09.170536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.343 qpair failed and we were unable to recover it.
00:26:32.343 [2024-11-20 09:17:09.170651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.343 [2024-11-20 09:17:09.170677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.343 qpair failed and we were unable to recover it.
00:26:32.343 [2024-11-20 09:17:09.170769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.343 [2024-11-20 09:17:09.170795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.343 qpair failed and we were unable to recover it.
00:26:32.343 [2024-11-20 09:17:09.170910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.343 [2024-11-20 09:17:09.170944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.343 qpair failed and we were unable to recover it.
00:26:32.343 [2024-11-20 09:17:09.171094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.343 [2024-11-20 09:17:09.171147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.343 qpair failed and we were unable to recover it.
00:26:32.343 [2024-11-20 09:17:09.171252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.343 [2024-11-20 09:17:09.171292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.343 qpair failed and we were unable to recover it.
00:26:32.343 [2024-11-20 09:17:09.171423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.343 [2024-11-20 09:17:09.171452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.343 qpair failed and we were unable to recover it.
00:26:32.343 [2024-11-20 09:17:09.171536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.343 [2024-11-20 09:17:09.171563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.343 qpair failed and we were unable to recover it.
00:26:32.343 [2024-11-20 09:17:09.171685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.343 [2024-11-20 09:17:09.171760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.343 qpair failed and we were unable to recover it.
00:26:32.343 [2024-11-20 09:17:09.171893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.343 [2024-11-20 09:17:09.171953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.343 qpair failed and we were unable to recover it.
00:26:32.343 [2024-11-20 09:17:09.172088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.343 [2024-11-20 09:17:09.172158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.343 qpair failed and we were unable to recover it.
00:26:32.343 [2024-11-20 09:17:09.172298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.343 [2024-11-20 09:17:09.172333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.343 qpair failed and we were unable to recover it.
00:26:32.343 [2024-11-20 09:17:09.172414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.343 [2024-11-20 09:17:09.172441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.343 qpair failed and we were unable to recover it.
00:26:32.343 [2024-11-20 09:17:09.172556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.343 [2024-11-20 09:17:09.172583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.343 qpair failed and we were unable to recover it.
00:26:32.343 [2024-11-20 09:17:09.172669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.343 [2024-11-20 09:17:09.172696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.343 qpair failed and we were unable to recover it.
00:26:32.343 [2024-11-20 09:17:09.172834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.343 [2024-11-20 09:17:09.172878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.343 qpair failed and we were unable to recover it.
00:26:32.343 [2024-11-20 09:17:09.173020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.343 [2024-11-20 09:17:09.173053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.343 qpair failed and we were unable to recover it.
00:26:32.343 [2024-11-20 09:17:09.173174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.343 [2024-11-20 09:17:09.173214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.343 qpair failed and we were unable to recover it.
00:26:32.343 [2024-11-20 09:17:09.173352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.343 [2024-11-20 09:17:09.173381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.343 qpair failed and we were unable to recover it.
00:26:32.343 [2024-11-20 09:17:09.173474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.343 [2024-11-20 09:17:09.173506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.343 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.173592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.173619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.173701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.173749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.173846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.173872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.173958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.173984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.174108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.174147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.174268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.174296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.174418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.174444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.174530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.174556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.174684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.174723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.174818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.174847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.174934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.174982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.175117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.175163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.175314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.175341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.175431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.175457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.175542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.175569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.175690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.175717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.175830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.175856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.175969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.176003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.176128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.176155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.176260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.176286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.176424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.176463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.176580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.176607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.176700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.176726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.176866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.176912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.177019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.177053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.177150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.177176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.177272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.177301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.177403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.177429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.177560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.177605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.177749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.177777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.177870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.177897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.177987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.178015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.178101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.178128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.178246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.178271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.178384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.178410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.178491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.178517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.178642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.178690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.178792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.178828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.178927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.178962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.344 qpair failed and we were unable to recover it.
00:26:32.344 [2024-11-20 09:17:09.179100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.344 [2024-11-20 09:17:09.179133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.345 qpair failed and we were unable to recover it.
00:26:32.345 [2024-11-20 09:17:09.179249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.345 [2024-11-20 09:17:09.179276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.345 qpair failed and we were unable to recover it.
00:26:32.345 [2024-11-20 09:17:09.179396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.345 [2024-11-20 09:17:09.179423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.345 qpair failed and we were unable to recover it.
00:26:32.345 [2024-11-20 09:17:09.179517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.345 [2024-11-20 09:17:09.179543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.345 qpair failed and we were unable to recover it.
00:26:32.345 [2024-11-20 09:17:09.179619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.345 [2024-11-20 09:17:09.179645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.345 qpair failed and we were unable to recover it.
00:26:32.345 [2024-11-20 09:17:09.179788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.345 [2024-11-20 09:17:09.179820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.345 qpair failed and we were unable to recover it.
00:26:32.345 [2024-11-20 09:17:09.179961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.345 [2024-11-20 09:17:09.180010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.345 qpair failed and we were unable to recover it.
00:26:32.345 [2024-11-20 09:17:09.180120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.345 [2024-11-20 09:17:09.180153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.345 qpair failed and we were unable to recover it.
00:26:32.345 [2024-11-20 09:17:09.180264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.345 [2024-11-20 09:17:09.180290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.345 qpair failed and we were unable to recover it.
00:26:32.345 [2024-11-20 09:17:09.180404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.345 [2024-11-20 09:17:09.180430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.345 qpair failed and we were unable to recover it.
00:26:32.345 [2024-11-20 09:17:09.180561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.180600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.180715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.180742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.180858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.180903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.181033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.181080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.181207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.181233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 
00:26:32.345 [2024-11-20 09:17:09.181329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.181356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.181461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.181487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.181570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.181596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.181692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.181718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.181801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.181829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 
00:26:32.345 [2024-11-20 09:17:09.181916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.181945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.182067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.182094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.182211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.182237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.182328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.182355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.182446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.182472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 
00:26:32.345 [2024-11-20 09:17:09.182611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.182638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.182740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.182772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.182932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.182984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.183066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.183091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.183183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.183209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 
00:26:32.345 [2024-11-20 09:17:09.183336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.183375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.183502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.183529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.183622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.183648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.183756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.183782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.183914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.183946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 
00:26:32.345 [2024-11-20 09:17:09.184085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.184117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.184259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.184285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.184373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.184400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.184482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.184508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.184595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.184621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 
00:26:32.345 [2024-11-20 09:17:09.184737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.184763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.184878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.184923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.185142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.185190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.185275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.185301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.185429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.185456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 
00:26:32.345 [2024-11-20 09:17:09.185542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.185568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.185651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.185677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.185774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.185807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.185952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.185999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.186090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.186116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 
00:26:32.345 [2024-11-20 09:17:09.186205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.186230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.186325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.186353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.186451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.186489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.186628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.186656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.186746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.186779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 
00:26:32.345 [2024-11-20 09:17:09.186903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.186929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.187016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.187041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.187129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.187157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.187263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.187290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.187385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.187411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 
00:26:32.345 [2024-11-20 09:17:09.187524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.187549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.187633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.187659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.345 [2024-11-20 09:17:09.187773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.345 [2024-11-20 09:17:09.187799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.345 qpair failed and we were unable to recover it. 00:26:32.346 [2024-11-20 09:17:09.187895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.346 [2024-11-20 09:17:09.187921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.346 qpair failed and we were unable to recover it. 00:26:32.346 [2024-11-20 09:17:09.188010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.346 [2024-11-20 09:17:09.188036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.346 qpair failed and we were unable to recover it. 
00:26:32.346 [2024-11-20 09:17:09.188125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.346 [2024-11-20 09:17:09.188151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.346 qpair failed and we were unable to recover it. 00:26:32.346 [2024-11-20 09:17:09.188239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.346 [2024-11-20 09:17:09.188265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.346 qpair failed and we were unable to recover it. 00:26:32.346 [2024-11-20 09:17:09.188368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.346 [2024-11-20 09:17:09.188408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.346 qpair failed and we were unable to recover it. 00:26:32.346 [2024-11-20 09:17:09.188527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.346 [2024-11-20 09:17:09.188566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.346 qpair failed and we were unable to recover it. 00:26:32.346 [2024-11-20 09:17:09.188661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.346 [2024-11-20 09:17:09.188689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.346 qpair failed and we were unable to recover it. 
00:26:32.346 [2024-11-20 09:17:09.188782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.346 [2024-11-20 09:17:09.188809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.346 qpair failed and we were unable to recover it. 00:26:32.346 [2024-11-20 09:17:09.188893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.346 [2024-11-20 09:17:09.188919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.346 qpair failed and we were unable to recover it. 00:26:32.346 [2024-11-20 09:17:09.189001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.346 [2024-11-20 09:17:09.189027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.346 qpair failed and we were unable to recover it. 00:26:32.346 [2024-11-20 09:17:09.189105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.346 [2024-11-20 09:17:09.189131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.346 qpair failed and we were unable to recover it. 00:26:32.346 [2024-11-20 09:17:09.189243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.346 [2024-11-20 09:17:09.189269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.346 qpair failed and we were unable to recover it. 
00:26:32.346 [2024-11-20 09:17:09.189372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.346 [2024-11-20 09:17:09.189411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.346 qpair failed and we were unable to recover it. 00:26:32.346 [2024-11-20 09:17:09.189502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.346 [2024-11-20 09:17:09.189529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.346 qpair failed and we were unable to recover it. 00:26:32.346 [2024-11-20 09:17:09.189612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.346 [2024-11-20 09:17:09.189638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.346 qpair failed and we were unable to recover it. 00:26:32.346 [2024-11-20 09:17:09.189772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.346 [2024-11-20 09:17:09.189820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.346 qpair failed and we were unable to recover it. 00:26:32.346 [2024-11-20 09:17:09.189899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.346 [2024-11-20 09:17:09.189924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.346 qpair failed and we were unable to recover it. 
00:26:32.346 [2024-11-20 09:17:09.190009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.346 [2024-11-20 09:17:09.190036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.346 qpair failed and we were unable to recover it. 00:26:32.346 [2024-11-20 09:17:09.190114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.346 [2024-11-20 09:17:09.190146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.346 qpair failed and we were unable to recover it. 00:26:32.346 [2024-11-20 09:17:09.190233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.346 [2024-11-20 09:17:09.190263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.346 qpair failed and we were unable to recover it. 00:26:32.346 [2024-11-20 09:17:09.190362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.346 [2024-11-20 09:17:09.190389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.346 qpair failed and we were unable to recover it. 00:26:32.346 [2024-11-20 09:17:09.190484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.346 [2024-11-20 09:17:09.190510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.346 qpair failed and we were unable to recover it. 
00:26:32.346 [2024-11-20 09:17:09.190594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.346 [2024-11-20 09:17:09.190620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.346 qpair failed and we were unable to recover it. 00:26:32.346 [2024-11-20 09:17:09.190706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.346 [2024-11-20 09:17:09.190732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.346 qpair failed and we were unable to recover it. 00:26:32.346 [2024-11-20 09:17:09.190854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.346 [2024-11-20 09:17:09.190880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.346 qpair failed and we were unable to recover it. 00:26:32.346 [2024-11-20 09:17:09.190980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.346 [2024-11-20 09:17:09.191013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.346 qpair failed and we were unable to recover it. 00:26:32.346 [2024-11-20 09:17:09.191119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.346 [2024-11-20 09:17:09.191145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.346 qpair failed and we were unable to recover it. 
00:26:32.346 [2024-11-20 09:17:09.191240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.346 [2024-11-20 09:17:09.191280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.346 qpair failed and we were unable to recover it. 00:26:32.346 [2024-11-20 09:17:09.191406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.346 [2024-11-20 09:17:09.191433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.346 qpair failed and we were unable to recover it. 00:26:32.346 [2024-11-20 09:17:09.191531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.346 [2024-11-20 09:17:09.191558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.346 qpair failed and we were unable to recover it. 00:26:32.346 [2024-11-20 09:17:09.191645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.346 [2024-11-20 09:17:09.191671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.346 qpair failed and we were unable to recover it. 00:26:32.346 [2024-11-20 09:17:09.191784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.346 [2024-11-20 09:17:09.191819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.346 qpair failed and we were unable to recover it. 
00:26:32.346 [2024-11-20 09:17:09.191961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.346 [2024-11-20 09:17:09.191994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.346 qpair failed and we were unable to recover it.
00:26:32.346 [2024-11-20 09:17:09.192105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.346 [2024-11-20 09:17:09.192139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.346 qpair failed and we were unable to recover it.
00:26:32.346 [2024-11-20 09:17:09.192259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.346 [2024-11-20 09:17:09.192288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.346 qpair failed and we were unable to recover it.
00:26:32.346 [2024-11-20 09:17:09.192396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.346 [2024-11-20 09:17:09.192423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.346 qpair failed and we were unable to recover it.
00:26:32.346 [2024-11-20 09:17:09.192507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.346 [2024-11-20 09:17:09.192533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.346 qpair failed and we were unable to recover it.
00:26:32.346 [2024-11-20 09:17:09.192644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.346 [2024-11-20 09:17:09.192669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.346 qpair failed and we were unable to recover it.
00:26:32.346 [2024-11-20 09:17:09.192771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.346 [2024-11-20 09:17:09.192803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.346 qpair failed and we were unable to recover it.
00:26:32.346 [2024-11-20 09:17:09.192930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.346 [2024-11-20 09:17:09.192956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.346 qpair failed and we were unable to recover it.
00:26:32.346 [2024-11-20 09:17:09.193095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.346 [2024-11-20 09:17:09.193122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.346 qpair failed and we were unable to recover it.
00:26:32.346 [2024-11-20 09:17:09.193218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.346 [2024-11-20 09:17:09.193256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.346 qpair failed and we were unable to recover it.
00:26:32.346 [2024-11-20 09:17:09.193387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.346 [2024-11-20 09:17:09.193416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.346 qpair failed and we were unable to recover it.
00:26:32.346 [2024-11-20 09:17:09.193495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.346 [2024-11-20 09:17:09.193522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.346 qpair failed and we were unable to recover it.
00:26:32.346 [2024-11-20 09:17:09.193605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.346 [2024-11-20 09:17:09.193632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.346 qpair failed and we were unable to recover it.
00:26:32.346 [2024-11-20 09:17:09.193733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.346 [2024-11-20 09:17:09.193760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.346 qpair failed and we were unable to recover it.
00:26:32.346 [2024-11-20 09:17:09.193861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.346 [2024-11-20 09:17:09.193895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.346 qpair failed and we were unable to recover it.
00:26:32.346 [2024-11-20 09:17:09.194000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.346 [2024-11-20 09:17:09.194035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.346 qpair failed and we were unable to recover it.
00:26:32.346 [2024-11-20 09:17:09.194170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.346 [2024-11-20 09:17:09.194203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.346 qpair failed and we were unable to recover it.
00:26:32.346 [2024-11-20 09:17:09.194323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.346 [2024-11-20 09:17:09.194350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.346 qpair failed and we were unable to recover it.
00:26:32.346 [2024-11-20 09:17:09.194437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.346 [2024-11-20 09:17:09.194464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.346 qpair failed and we were unable to recover it.
00:26:32.346 [2024-11-20 09:17:09.194586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.346 [2024-11-20 09:17:09.194612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.346 qpair failed and we were unable to recover it.
00:26:32.346 [2024-11-20 09:17:09.194754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.346 [2024-11-20 09:17:09.194780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.346 qpair failed and we were unable to recover it.
00:26:32.346 [2024-11-20 09:17:09.194896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.346 [2024-11-20 09:17:09.194941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.346 qpair failed and we were unable to recover it.
00:26:32.346 [2024-11-20 09:17:09.195086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.346 [2024-11-20 09:17:09.195137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.195233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.347 [2024-11-20 09:17:09.195258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.195369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.347 [2024-11-20 09:17:09.195395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.195474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.347 [2024-11-20 09:17:09.195500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.195587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.347 [2024-11-20 09:17:09.195620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.195731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.347 [2024-11-20 09:17:09.195758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.195838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.347 [2024-11-20 09:17:09.195865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.195957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.347 [2024-11-20 09:17:09.195983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.196099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.347 [2024-11-20 09:17:09.196127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.196232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.347 [2024-11-20 09:17:09.196271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.196371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.347 [2024-11-20 09:17:09.196398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.196512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.347 [2024-11-20 09:17:09.196537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.196618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.347 [2024-11-20 09:17:09.196643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.196729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.347 [2024-11-20 09:17:09.196755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.196860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.347 [2024-11-20 09:17:09.196886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.196979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.347 [2024-11-20 09:17:09.197004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.197097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.347 [2024-11-20 09:17:09.197136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.197233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.347 [2024-11-20 09:17:09.197261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.197390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.347 [2024-11-20 09:17:09.197417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.197504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.347 [2024-11-20 09:17:09.197529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.197633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.347 [2024-11-20 09:17:09.197659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.197754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.347 [2024-11-20 09:17:09.197781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.197866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.347 [2024-11-20 09:17:09.197893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.197983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.347 [2024-11-20 09:17:09.198009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.198092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.347 [2024-11-20 09:17:09.198121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.198206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.347 [2024-11-20 09:17:09.198232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.198320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.347 [2024-11-20 09:17:09.198348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.198462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.347 [2024-11-20 09:17:09.198491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.198607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.347 [2024-11-20 09:17:09.198642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.198785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.347 [2024-11-20 09:17:09.198818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.198924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.347 [2024-11-20 09:17:09.198971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.199109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.347 [2024-11-20 09:17:09.199140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.199233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.347 [2024-11-20 09:17:09.199259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.199380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.347 [2024-11-20 09:17:09.199406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.199501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.347 [2024-11-20 09:17:09.199534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.199687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.347 [2024-11-20 09:17:09.199733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.347 qpair failed and we were unable to recover it.
00:26:32.347 [2024-11-20 09:17:09.199857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.199902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.199996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.200024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.200113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.200140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.200229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.200257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.200350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.200378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.200468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.200495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.200614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.200642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.200753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.200788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.200895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.200928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.201072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.201107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.201250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.201278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.201390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.201430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.201522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.201551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.201665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.201699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.201896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.201929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.202047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.202080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.202175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.202208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.202388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.202427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.202525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.202553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.202670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.202696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.202784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.202810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.202895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.202922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.203067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.203093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.203214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.203241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.203328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.203354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.203444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.203470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.203584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.203611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.203699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.203726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.203821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.203849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.203943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.203970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.204075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.204101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.204183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.204210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.204311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.204340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.204433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.204460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.204553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.204582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.204696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.204728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.204824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.348 [2024-11-20 09:17:09.204851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.348 qpair failed and we were unable to recover it.
00:26:32.348 [2024-11-20 09:17:09.204937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.348 [2024-11-20 09:17:09.204964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.348 qpair failed and we were unable to recover it. 00:26:32.348 [2024-11-20 09:17:09.205059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.348 [2024-11-20 09:17:09.205085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.348 qpair failed and we were unable to recover it. 00:26:32.348 [2024-11-20 09:17:09.205170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.348 [2024-11-20 09:17:09.205196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.348 qpair failed and we were unable to recover it. 00:26:32.348 [2024-11-20 09:17:09.205286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.348 [2024-11-20 09:17:09.205319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.348 qpair failed and we were unable to recover it. 00:26:32.348 [2024-11-20 09:17:09.205417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.348 [2024-11-20 09:17:09.205444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.348 qpair failed and we were unable to recover it. 
00:26:32.348 [2024-11-20 09:17:09.205537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.348 [2024-11-20 09:17:09.205564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.348 qpair failed and we were unable to recover it. 00:26:32.348 [2024-11-20 09:17:09.205674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.348 [2024-11-20 09:17:09.205701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.348 qpair failed and we were unable to recover it. 00:26:32.348 [2024-11-20 09:17:09.205793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.348 [2024-11-20 09:17:09.205820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.348 qpair failed and we were unable to recover it. 00:26:32.348 [2024-11-20 09:17:09.205961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.348 [2024-11-20 09:17:09.205995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.348 qpair failed and we were unable to recover it. 00:26:32.348 [2024-11-20 09:17:09.206130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.348 [2024-11-20 09:17:09.206163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.348 qpair failed and we were unable to recover it. 
00:26:32.348 [2024-11-20 09:17:09.206272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.348 [2024-11-20 09:17:09.206298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.348 qpair failed and we were unable to recover it. 00:26:32.348 [2024-11-20 09:17:09.206407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.348 [2024-11-20 09:17:09.206435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.348 qpair failed and we were unable to recover it. 00:26:32.348 [2024-11-20 09:17:09.206533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.348 [2024-11-20 09:17:09.206559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 00:26:32.349 [2024-11-20 09:17:09.206643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.206670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 00:26:32.349 [2024-11-20 09:17:09.206803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.206836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 
00:26:32.349 [2024-11-20 09:17:09.206936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.206969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 00:26:32.349 [2024-11-20 09:17:09.207093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.207119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 00:26:32.349 [2024-11-20 09:17:09.207278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.207310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 00:26:32.349 [2024-11-20 09:17:09.207427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.207454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 00:26:32.349 [2024-11-20 09:17:09.207544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.207572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 
00:26:32.349 [2024-11-20 09:17:09.207679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.207713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 00:26:32.349 [2024-11-20 09:17:09.207837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.207880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 00:26:32.349 [2024-11-20 09:17:09.208009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.208042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 00:26:32.349 [2024-11-20 09:17:09.208156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.208190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 00:26:32.349 [2024-11-20 09:17:09.208342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.208382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 
00:26:32.349 [2024-11-20 09:17:09.208515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.208554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 00:26:32.349 [2024-11-20 09:17:09.208649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.208676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 00:26:32.349 [2024-11-20 09:17:09.208819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.208846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 00:26:32.349 [2024-11-20 09:17:09.208959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.208985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 00:26:32.349 [2024-11-20 09:17:09.209083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.209109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 
00:26:32.349 [2024-11-20 09:17:09.209212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.209240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 00:26:32.349 [2024-11-20 09:17:09.209352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.209380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 00:26:32.349 [2024-11-20 09:17:09.209472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.209512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 00:26:32.349 [2024-11-20 09:17:09.209636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.209663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 00:26:32.349 [2024-11-20 09:17:09.209757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.209783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 
00:26:32.349 [2024-11-20 09:17:09.209896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.209923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 00:26:32.349 [2024-11-20 09:17:09.210005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.210031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 00:26:32.349 [2024-11-20 09:17:09.210132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.210176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 00:26:32.349 [2024-11-20 09:17:09.210277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.210320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 00:26:32.349 [2024-11-20 09:17:09.210410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.210437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 
00:26:32.349 [2024-11-20 09:17:09.210553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.210579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 00:26:32.349 [2024-11-20 09:17:09.210706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.210739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 00:26:32.349 [2024-11-20 09:17:09.210869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.210895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 00:26:32.349 [2024-11-20 09:17:09.211012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.211039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 00:26:32.349 [2024-11-20 09:17:09.211150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.211190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 
00:26:32.349 [2024-11-20 09:17:09.211278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.211315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 00:26:32.349 [2024-11-20 09:17:09.211400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.211426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 00:26:32.349 [2024-11-20 09:17:09.211503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.211529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 00:26:32.349 [2024-11-20 09:17:09.211617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.211642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 00:26:32.349 [2024-11-20 09:17:09.211726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.211753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 
00:26:32.349 [2024-11-20 09:17:09.211858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.211886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 00:26:32.349 [2024-11-20 09:17:09.211975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.349 [2024-11-20 09:17:09.212001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.349 qpair failed and we were unable to recover it. 00:26:32.349 [2024-11-20 09:17:09.212143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.212170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 00:26:32.350 [2024-11-20 09:17:09.212243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.212269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 00:26:32.350 [2024-11-20 09:17:09.212371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.212411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 
00:26:32.350 [2024-11-20 09:17:09.212536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.212564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 00:26:32.350 [2024-11-20 09:17:09.212640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.212666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 00:26:32.350 [2024-11-20 09:17:09.212782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.212809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 00:26:32.350 [2024-11-20 09:17:09.212946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.212980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 00:26:32.350 [2024-11-20 09:17:09.213112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.213149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 
00:26:32.350 [2024-11-20 09:17:09.213312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.213361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 00:26:32.350 [2024-11-20 09:17:09.213457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.213485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 00:26:32.350 [2024-11-20 09:17:09.213566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.213592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 00:26:32.350 [2024-11-20 09:17:09.213666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.213692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 00:26:32.350 [2024-11-20 09:17:09.213826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.213873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 
00:26:32.350 [2024-11-20 09:17:09.213965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.213998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 00:26:32.350 [2024-11-20 09:17:09.214121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.214155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 00:26:32.350 [2024-11-20 09:17:09.214263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.214299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 00:26:32.350 [2024-11-20 09:17:09.214436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.214463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 00:26:32.350 [2024-11-20 09:17:09.214575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.214602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 
00:26:32.350 [2024-11-20 09:17:09.214692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.214718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 00:26:32.350 [2024-11-20 09:17:09.214804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.214832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 00:26:32.350 [2024-11-20 09:17:09.214920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.214947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 00:26:32.350 [2024-11-20 09:17:09.215075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.215115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 00:26:32.350 [2024-11-20 09:17:09.215203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.215230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 
00:26:32.350 [2024-11-20 09:17:09.215320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.215348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 00:26:32.350 [2024-11-20 09:17:09.215438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.215464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 00:26:32.350 [2024-11-20 09:17:09.215565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.215598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 00:26:32.350 [2024-11-20 09:17:09.215748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.215781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 00:26:32.350 [2024-11-20 09:17:09.215917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.215961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 
00:26:32.350 [2024-11-20 09:17:09.216052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.216077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 00:26:32.350 [2024-11-20 09:17:09.216170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.216196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 00:26:32.350 [2024-11-20 09:17:09.216283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.216315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 00:26:32.350 [2024-11-20 09:17:09.216429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.216454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 00:26:32.350 [2024-11-20 09:17:09.216562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.216587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 
00:26:32.350 [2024-11-20 09:17:09.216678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.216703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 00:26:32.350 [2024-11-20 09:17:09.216814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.216841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 00:26:32.350 [2024-11-20 09:17:09.216918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.216944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 00:26:32.350 [2024-11-20 09:17:09.217087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.217112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 00:26:32.350 [2024-11-20 09:17:09.217218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.217258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 
00:26:32.350 [2024-11-20 09:17:09.217380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.217420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 00:26:32.350 [2024-11-20 09:17:09.217544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.217572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 00:26:32.350 [2024-11-20 09:17:09.217714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.350 [2024-11-20 09:17:09.217762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.350 qpair failed and we were unable to recover it. 00:26:32.351 [2024-11-20 09:17:09.217844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.351 [2024-11-20 09:17:09.217870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.351 qpair failed and we were unable to recover it. 00:26:32.351 [2024-11-20 09:17:09.217984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.351 [2024-11-20 09:17:09.218030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.351 qpair failed and we were unable to recover it. 
00:26:32.351 [2024-11-20 09:17:09.218113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.218139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.218279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.218330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.218428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.218456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.218618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.218651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.218821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.218855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.218961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.218994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.219132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.219160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.219242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.219268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.219363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.219390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.219501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.219527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.219615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.219641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.219734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.219760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.219876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.219904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.220021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.220048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.220158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.220197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.220286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.220321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.220443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.220470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.220565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.220591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.220706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.220732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.220827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.220860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.220969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.221021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.221159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.221187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.221276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.221310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.221428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.221455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.221553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.221581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.221704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.221730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.221831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.221863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.222016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.222059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.222172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.222197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.222338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.222365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.222471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.222496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.222588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.222614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.222722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.222749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.222872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.222897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.223011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.223037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.223130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.351 [2024-11-20 09:17:09.223159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.351 qpair failed and we were unable to recover it.
00:26:32.351 [2024-11-20 09:17:09.223266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.352 [2024-11-20 09:17:09.223315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.352 qpair failed and we were unable to recover it.
00:26:32.352 [2024-11-20 09:17:09.223415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.352 [2024-11-20 09:17:09.223449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.352 qpair failed and we were unable to recover it.
00:26:32.352 [2024-11-20 09:17:09.223532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.352 [2024-11-20 09:17:09.223559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.352 qpair failed and we were unable to recover it.
00:26:32.352 [2024-11-20 09:17:09.223698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.352 [2024-11-20 09:17:09.223730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.352 qpair failed and we were unable to recover it.
00:26:32.352 [2024-11-20 09:17:09.223939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.352 [2024-11-20 09:17:09.223970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.352 qpair failed and we were unable to recover it.
00:26:32.352 [2024-11-20 09:17:09.224077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.352 [2024-11-20 09:17:09.224104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.352 qpair failed and we were unable to recover it.
00:26:32.352 [2024-11-20 09:17:09.224196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.352 [2024-11-20 09:17:09.224222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.352 qpair failed and we were unable to recover it.
00:26:32.352 [2024-11-20 09:17:09.224317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.352 [2024-11-20 09:17:09.224344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.352 qpair failed and we were unable to recover it.
00:26:32.352 [2024-11-20 09:17:09.224438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.352 [2024-11-20 09:17:09.224463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.352 qpair failed and we were unable to recover it.
00:26:32.352 [2024-11-20 09:17:09.224556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.352 [2024-11-20 09:17:09.224582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.352 qpair failed and we were unable to recover it.
00:26:32.352 [2024-11-20 09:17:09.224662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.352 [2024-11-20 09:17:09.224687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.352 qpair failed and we were unable to recover it.
00:26:32.352 [2024-11-20 09:17:09.224774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.352 [2024-11-20 09:17:09.224799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.352 qpair failed and we were unable to recover it.
00:26:32.352 [2024-11-20 09:17:09.224879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.352 [2024-11-20 09:17:09.224907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.352 qpair failed and we were unable to recover it.
00:26:32.352 [2024-11-20 09:17:09.225020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.352 [2024-11-20 09:17:09.225046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.352 qpair failed and we were unable to recover it.
00:26:32.352 [2024-11-20 09:17:09.225128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.352 [2024-11-20 09:17:09.225154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.352 qpair failed and we were unable to recover it.
00:26:32.352 [2024-11-20 09:17:09.225253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.352 [2024-11-20 09:17:09.225282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.352 qpair failed and we were unable to recover it.
00:26:32.352 [2024-11-20 09:17:09.225378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.352 [2024-11-20 09:17:09.225405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.352 qpair failed and we were unable to recover it.
00:26:32.352 [2024-11-20 09:17:09.225494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.352 [2024-11-20 09:17:09.225523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.352 qpair failed and we were unable to recover it.
00:26:32.352 [2024-11-20 09:17:09.225617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.352 [2024-11-20 09:17:09.225644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.352 qpair failed and we were unable to recover it.
00:26:32.352 [2024-11-20 09:17:09.225722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.352 [2024-11-20 09:17:09.225748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.352 qpair failed and we were unable to recover it.
00:26:32.352 [2024-11-20 09:17:09.225862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.352 [2024-11-20 09:17:09.225889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.352 qpair failed and we were unable to recover it.
00:26:32.352 [2024-11-20 09:17:09.226022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.352 [2024-11-20 09:17:09.226071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.352 qpair failed and we were unable to recover it.
00:26:32.352 [2024-11-20 09:17:09.226167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.352 [2024-11-20 09:17:09.226206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.352 qpair failed and we were unable to recover it.
00:26:32.352 [2024-11-20 09:17:09.226296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.353 [2024-11-20 09:17:09.226339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.353 qpair failed and we were unable to recover it.
00:26:32.353 [2024-11-20 09:17:09.226426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.353 [2024-11-20 09:17:09.226452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.353 qpair failed and we were unable to recover it.
00:26:32.353 [2024-11-20 09:17:09.226598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.353 [2024-11-20 09:17:09.226644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.353 qpair failed and we were unable to recover it.
00:26:32.353 [2024-11-20 09:17:09.226782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.353 [2024-11-20 09:17:09.226828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.353 qpair failed and we were unable to recover it.
00:26:32.353 [2024-11-20 09:17:09.226966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.353 [2024-11-20 09:17:09.227000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.353 qpair failed and we were unable to recover it.
00:26:32.353 [2024-11-20 09:17:09.227124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.353 [2024-11-20 09:17:09.227151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.353 qpair failed and we were unable to recover it.
00:26:32.353 [2024-11-20 09:17:09.227291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.353 [2024-11-20 09:17:09.227343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.353 qpair failed and we were unable to recover it.
00:26:32.353 [2024-11-20 09:17:09.227433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.353 [2024-11-20 09:17:09.227460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.353 qpair failed and we were unable to recover it.
00:26:32.353 [2024-11-20 09:17:09.227545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.353 [2024-11-20 09:17:09.227571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.353 qpair failed and we were unable to recover it.
00:26:32.353 [2024-11-20 09:17:09.227702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.353 [2024-11-20 09:17:09.227735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.353 qpair failed and we were unable to recover it.
00:26:32.353 [2024-11-20 09:17:09.227867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.353 [2024-11-20 09:17:09.227915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.353 qpair failed and we were unable to recover it.
00:26:32.353 [2024-11-20 09:17:09.228028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.353 [2024-11-20 09:17:09.228055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.353 qpair failed and we were unable to recover it.
00:26:32.353 [2024-11-20 09:17:09.228169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.353 [2024-11-20 09:17:09.228196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.353 qpair failed and we were unable to recover it.
00:26:32.353 [2024-11-20 09:17:09.228330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.353 [2024-11-20 09:17:09.228370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.353 qpair failed and we were unable to recover it.
00:26:32.353 [2024-11-20 09:17:09.228492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.353 [2024-11-20 09:17:09.228520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.353 qpair failed and we were unable to recover it.
00:26:32.353 [2024-11-20 09:17:09.228617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.353 [2024-11-20 09:17:09.228650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.354 qpair failed and we were unable to recover it.
00:26:32.354 [2024-11-20 09:17:09.228773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.354 [2024-11-20 09:17:09.228822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.354 qpair failed and we were unable to recover it.
00:26:32.354 [2024-11-20 09:17:09.228902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.354 [2024-11-20 09:17:09.228928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.354 qpair failed and we were unable to recover it.
00:26:32.354 [2024-11-20 09:17:09.229031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.354 [2024-11-20 09:17:09.229063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.354 qpair failed and we were unable to recover it.
00:26:32.354 [2024-11-20 09:17:09.229198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.354 [2024-11-20 09:17:09.229224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.354 qpair failed and we were unable to recover it.
00:26:32.354 [2024-11-20 09:17:09.229350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.354 [2024-11-20 09:17:09.229377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.354 qpair failed and we were unable to recover it.
00:26:32.354 [2024-11-20 09:17:09.229465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.354 [2024-11-20 09:17:09.229490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.354 qpair failed and we were unable to recover it.
00:26:32.354 [2024-11-20 09:17:09.229620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.354 [2024-11-20 09:17:09.229660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.354 qpair failed and we were unable to recover it.
00:26:32.354 [2024-11-20 09:17:09.229757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.354 [2024-11-20 09:17:09.229785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.354 qpair failed and we were unable to recover it.
00:26:32.354 [2024-11-20 09:17:09.229872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.354 [2024-11-20 09:17:09.229899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.354 qpair failed and we were unable to recover it.
00:26:32.354 [2024-11-20 09:17:09.229986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.354 [2024-11-20 09:17:09.230013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.354 qpair failed and we were unable to recover it.
00:26:32.354 [2024-11-20 09:17:09.230125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.354 [2024-11-20 09:17:09.230152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.354 qpair failed and we were unable to recover it.
00:26:32.354 [2024-11-20 09:17:09.230259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.354 [2024-11-20 09:17:09.230299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.354 qpair failed and we were unable to recover it.
00:26:32.354 [2024-11-20 09:17:09.230406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.354 [2024-11-20 09:17:09.230432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.354 qpair failed and we were unable to recover it.
00:26:32.354 [2024-11-20 09:17:09.230517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.354 [2024-11-20 09:17:09.230543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.354 qpair failed and we were unable to recover it.
00:26:32.354 [2024-11-20 09:17:09.230627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.355 [2024-11-20 09:17:09.230654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.355 qpair failed and we were unable to recover it.
00:26:32.355 [2024-11-20 09:17:09.230761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.355 [2024-11-20 09:17:09.230787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.355 qpair failed and we were unable to recover it.
00:26:32.355 [2024-11-20 09:17:09.230899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.355 [2024-11-20 09:17:09.230925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.355 qpair failed and we were unable to recover it.
00:26:32.355 [2024-11-20 09:17:09.231046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.355 [2024-11-20 09:17:09.231075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.355 qpair failed and we were unable to recover it.
00:26:32.355 [2024-11-20 09:17:09.231177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.355 [2024-11-20 09:17:09.231217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.355 qpair failed and we were unable to recover it.
00:26:32.355 [2024-11-20 09:17:09.231323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.355 [2024-11-20 09:17:09.231361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.355 qpair failed and we were unable to recover it.
00:26:32.355 [2024-11-20 09:17:09.231474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.355 [2024-11-20 09:17:09.231502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.355 qpair failed and we were unable to recover it. 00:26:32.355 [2024-11-20 09:17:09.231586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.355 [2024-11-20 09:17:09.231612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.355 qpair failed and we were unable to recover it. 00:26:32.355 [2024-11-20 09:17:09.231730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.355 [2024-11-20 09:17:09.231757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.355 qpair failed and we were unable to recover it. 00:26:32.355 [2024-11-20 09:17:09.231872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.355 [2024-11-20 09:17:09.231899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.355 qpair failed and we were unable to recover it. 00:26:32.355 [2024-11-20 09:17:09.231989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.355 [2024-11-20 09:17:09.232017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.355 qpair failed and we were unable to recover it. 
00:26:32.355 [2024-11-20 09:17:09.232132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.355 [2024-11-20 09:17:09.232159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.355 qpair failed and we were unable to recover it. 00:26:32.355 [2024-11-20 09:17:09.232251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.355 [2024-11-20 09:17:09.232278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.355 qpair failed and we were unable to recover it. 00:26:32.355 [2024-11-20 09:17:09.232387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.356 [2024-11-20 09:17:09.232415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.356 qpair failed and we were unable to recover it. 00:26:32.356 [2024-11-20 09:17:09.232493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.356 [2024-11-20 09:17:09.232519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.356 qpair failed and we were unable to recover it. 00:26:32.356 [2024-11-20 09:17:09.232619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.356 [2024-11-20 09:17:09.232659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.356 qpair failed and we were unable to recover it. 
00:26:32.356 [2024-11-20 09:17:09.232855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.356 [2024-11-20 09:17:09.232887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.356 qpair failed and we were unable to recover it. 00:26:32.356 [2024-11-20 09:17:09.232996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.356 [2024-11-20 09:17:09.233028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.356 qpair failed and we were unable to recover it. 00:26:32.356 [2024-11-20 09:17:09.233181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.356 [2024-11-20 09:17:09.233210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.356 qpair failed and we were unable to recover it. 00:26:32.356 [2024-11-20 09:17:09.233317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.356 [2024-11-20 09:17:09.233356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.356 qpair failed and we were unable to recover it. 00:26:32.356 [2024-11-20 09:17:09.233456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.356 [2024-11-20 09:17:09.233482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.356 qpair failed and we were unable to recover it. 
00:26:32.356 [2024-11-20 09:17:09.233572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.356 [2024-11-20 09:17:09.233599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.356 qpair failed and we were unable to recover it. 00:26:32.356 [2024-11-20 09:17:09.233737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.357 [2024-11-20 09:17:09.233783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.357 qpair failed and we were unable to recover it. 00:26:32.357 [2024-11-20 09:17:09.233901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.357 [2024-11-20 09:17:09.233945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.357 qpair failed and we were unable to recover it. 00:26:32.357 [2024-11-20 09:17:09.234084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.357 [2024-11-20 09:17:09.234111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.357 qpair failed and we were unable to recover it. 00:26:32.357 [2024-11-20 09:17:09.234263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.357 [2024-11-20 09:17:09.234310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.357 qpair failed and we were unable to recover it. 
00:26:32.357 [2024-11-20 09:17:09.234405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.357 [2024-11-20 09:17:09.234433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.357 qpair failed and we were unable to recover it. 00:26:32.357 [2024-11-20 09:17:09.234522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.357 [2024-11-20 09:17:09.234549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.357 qpair failed and we were unable to recover it. 00:26:32.357 [2024-11-20 09:17:09.234679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.357 [2024-11-20 09:17:09.234729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.357 qpair failed and we were unable to recover it. 00:26:32.357 [2024-11-20 09:17:09.234853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.357 [2024-11-20 09:17:09.234897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.357 qpair failed and we were unable to recover it. 00:26:32.357 [2024-11-20 09:17:09.235061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.357 [2024-11-20 09:17:09.235106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.357 qpair failed and we were unable to recover it. 
00:26:32.357 [2024-11-20 09:17:09.235226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.357 [2024-11-20 09:17:09.235254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.357 qpair failed and we were unable to recover it. 00:26:32.357 [2024-11-20 09:17:09.235346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.358 [2024-11-20 09:17:09.235373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.358 qpair failed and we were unable to recover it. 00:26:32.358 [2024-11-20 09:17:09.235465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.358 [2024-11-20 09:17:09.235492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.358 qpair failed and we were unable to recover it. 00:26:32.358 [2024-11-20 09:17:09.235580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.358 [2024-11-20 09:17:09.235606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.358 qpair failed and we were unable to recover it. 00:26:32.358 [2024-11-20 09:17:09.235699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.358 [2024-11-20 09:17:09.235726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.358 qpair failed and we were unable to recover it. 
00:26:32.358 [2024-11-20 09:17:09.235885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.358 [2024-11-20 09:17:09.235931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.358 qpair failed and we were unable to recover it. 00:26:32.358 [2024-11-20 09:17:09.236046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.358 [2024-11-20 09:17:09.236072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.358 qpair failed and we were unable to recover it. 00:26:32.358 [2024-11-20 09:17:09.236185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.358 [2024-11-20 09:17:09.236210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.358 qpair failed and we were unable to recover it. 00:26:32.358 [2024-11-20 09:17:09.236340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.358 [2024-11-20 09:17:09.236380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.358 qpair failed and we were unable to recover it. 00:26:32.358 [2024-11-20 09:17:09.236481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.358 [2024-11-20 09:17:09.236509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.358 qpair failed and we were unable to recover it. 
00:26:32.358 [2024-11-20 09:17:09.236612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.358 [2024-11-20 09:17:09.236652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.358 qpair failed and we were unable to recover it. 00:26:32.358 [2024-11-20 09:17:09.236788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.358 [2024-11-20 09:17:09.236815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.359 qpair failed and we were unable to recover it. 00:26:32.359 [2024-11-20 09:17:09.236904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.359 [2024-11-20 09:17:09.236930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.359 qpair failed and we were unable to recover it. 00:26:32.359 [2024-11-20 09:17:09.237041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.359 [2024-11-20 09:17:09.237067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.359 qpair failed and we were unable to recover it. 00:26:32.359 [2024-11-20 09:17:09.237181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.359 [2024-11-20 09:17:09.237206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.359 qpair failed and we were unable to recover it. 
00:26:32.359 [2024-11-20 09:17:09.237288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.359 [2024-11-20 09:17:09.237320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.359 qpair failed and we were unable to recover it. 00:26:32.359 [2024-11-20 09:17:09.237407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.359 [2024-11-20 09:17:09.237434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.359 qpair failed and we were unable to recover it. 00:26:32.359 [2024-11-20 09:17:09.237526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.359 [2024-11-20 09:17:09.237552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.359 qpair failed and we were unable to recover it. 00:26:32.359 [2024-11-20 09:17:09.237696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.359 [2024-11-20 09:17:09.237721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.359 qpair failed and we were unable to recover it. 00:26:32.359 [2024-11-20 09:17:09.237836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.359 [2024-11-20 09:17:09.237863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.359 qpair failed and we were unable to recover it. 
00:26:32.359 [2024-11-20 09:17:09.237950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.359 [2024-11-20 09:17:09.237975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.359 qpair failed and we were unable to recover it. 00:26:32.359 [2024-11-20 09:17:09.238101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.360 [2024-11-20 09:17:09.238140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.360 qpair failed and we were unable to recover it. 00:26:32.360 [2024-11-20 09:17:09.238230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.360 [2024-11-20 09:17:09.238257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.360 qpair failed and we were unable to recover it. 00:26:32.360 [2024-11-20 09:17:09.238393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.360 [2024-11-20 09:17:09.238432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.360 qpair failed and we were unable to recover it. 00:26:32.360 [2024-11-20 09:17:09.238533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.360 [2024-11-20 09:17:09.238562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.360 qpair failed and we were unable to recover it. 
00:26:32.360 [2024-11-20 09:17:09.238667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.360 [2024-11-20 09:17:09.238695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.360 qpair failed and we were unable to recover it. 00:26:32.360 [2024-11-20 09:17:09.238837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.360 [2024-11-20 09:17:09.238867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.360 qpair failed and we were unable to recover it. 00:26:32.360 [2024-11-20 09:17:09.239037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.360 [2024-11-20 09:17:09.239084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.360 qpair failed and we were unable to recover it. 00:26:32.360 [2024-11-20 09:17:09.239178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.360 [2024-11-20 09:17:09.239208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.360 qpair failed and we were unable to recover it. 00:26:32.360 [2024-11-20 09:17:09.239300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.360 [2024-11-20 09:17:09.239334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.360 qpair failed and we were unable to recover it. 
00:26:32.360 [2024-11-20 09:17:09.239453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.360 [2024-11-20 09:17:09.239479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.360 qpair failed and we were unable to recover it. 00:26:32.360 [2024-11-20 09:17:09.239589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.361 [2024-11-20 09:17:09.239619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.361 qpair failed and we were unable to recover it. 00:26:32.361 [2024-11-20 09:17:09.239763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.361 [2024-11-20 09:17:09.239792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.361 qpair failed and we were unable to recover it. 00:26:32.361 [2024-11-20 09:17:09.239929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.361 [2024-11-20 09:17:09.239955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.361 qpair failed and we were unable to recover it. 00:26:32.361 [2024-11-20 09:17:09.240066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.361 [2024-11-20 09:17:09.240092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.361 qpair failed and we were unable to recover it. 
00:26:32.361 [2024-11-20 09:17:09.240201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.361 [2024-11-20 09:17:09.240241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.361 qpair failed and we were unable to recover it. 00:26:32.361 [2024-11-20 09:17:09.240344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.361 [2024-11-20 09:17:09.240373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.361 qpair failed and we were unable to recover it. 00:26:32.361 [2024-11-20 09:17:09.240470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.361 [2024-11-20 09:17:09.240497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.361 qpair failed and we were unable to recover it. 00:26:32.361 [2024-11-20 09:17:09.240592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.361 [2024-11-20 09:17:09.240618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.361 qpair failed and we were unable to recover it. 00:26:32.361 [2024-11-20 09:17:09.240731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.361 [2024-11-20 09:17:09.240757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.361 qpair failed and we were unable to recover it. 
00:26:32.361 [2024-11-20 09:17:09.240842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.361 [2024-11-20 09:17:09.240870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.361 qpair failed and we were unable to recover it. 00:26:32.362 [2024-11-20 09:17:09.240972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.362 [2024-11-20 09:17:09.241003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.362 qpair failed and we were unable to recover it. 00:26:32.362 [2024-11-20 09:17:09.241117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.362 [2024-11-20 09:17:09.241157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.362 qpair failed and we were unable to recover it. 00:26:32.362 [2024-11-20 09:17:09.241277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.362 [2024-11-20 09:17:09.241323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.362 qpair failed and we were unable to recover it. 00:26:32.362 [2024-11-20 09:17:09.241411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.362 [2024-11-20 09:17:09.241438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.362 qpair failed and we were unable to recover it. 
00:26:32.362 [2024-11-20 09:17:09.241517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.362 [2024-11-20 09:17:09.241543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.362 qpair failed and we were unable to recover it. 00:26:32.362 [2024-11-20 09:17:09.241665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.362 [2024-11-20 09:17:09.241691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.362 qpair failed and we were unable to recover it. 00:26:32.362 [2024-11-20 09:17:09.241847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.362 [2024-11-20 09:17:09.241877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.362 qpair failed and we were unable to recover it. 00:26:32.362 [2024-11-20 09:17:09.241989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.362 [2024-11-20 09:17:09.242035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.362 qpair failed and we were unable to recover it. 00:26:32.362 [2024-11-20 09:17:09.242182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.362 [2024-11-20 09:17:09.242221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.362 qpair failed and we were unable to recover it. 
00:26:32.362 [2024-11-20 09:17:09.242370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.362 [2024-11-20 09:17:09.242399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.362 qpair failed and we were unable to recover it. 00:26:32.362 [2024-11-20 09:17:09.242489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.362 [2024-11-20 09:17:09.242520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.362 qpair failed and we were unable to recover it. 00:26:32.362 [2024-11-20 09:17:09.242628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.362 [2024-11-20 09:17:09.242660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.362 qpair failed and we were unable to recover it. 00:26:32.362 [2024-11-20 09:17:09.242845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.362 [2024-11-20 09:17:09.242875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.362 qpair failed and we were unable to recover it. 00:26:32.362 [2024-11-20 09:17:09.242972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.362 [2024-11-20 09:17:09.243002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.362 qpair failed and we were unable to recover it. 
00:26:32.362 [2024-11-20 09:17:09.243155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.362 [2024-11-20 09:17:09.243186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.362 qpair failed and we were unable to recover it. 00:26:32.362 [2024-11-20 09:17:09.243296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.362 [2024-11-20 09:17:09.243328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.362 qpair failed and we were unable to recover it. 00:26:32.362 [2024-11-20 09:17:09.243415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.363 [2024-11-20 09:17:09.243442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.363 qpair failed and we were unable to recover it. 00:26:32.363 [2024-11-20 09:17:09.243527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.363 [2024-11-20 09:17:09.243552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.363 qpair failed and we were unable to recover it. 00:26:32.363 [2024-11-20 09:17:09.243658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.363 [2024-11-20 09:17:09.243702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.363 qpair failed and we were unable to recover it. 
00:26:32.363 [2024-11-20 09:17:09.243872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.363 [2024-11-20 09:17:09.243915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.363 qpair failed and we were unable to recover it. 00:26:32.363 [2024-11-20 09:17:09.244024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.363 [2024-11-20 09:17:09.244067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.363 qpair failed and we were unable to recover it. 00:26:32.363 [2024-11-20 09:17:09.244197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.363 [2024-11-20 09:17:09.244236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.363 qpair failed and we were unable to recover it. 00:26:32.364 [2024-11-20 09:17:09.244338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.364 [2024-11-20 09:17:09.244367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.364 qpair failed and we were unable to recover it. 00:26:32.364 [2024-11-20 09:17:09.244453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.364 [2024-11-20 09:17:09.244479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.364 qpair failed and we were unable to recover it. 
00:26:32.364 [2024-11-20 09:17:09.244618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.365 [2024-11-20 09:17:09.244661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.365 qpair failed and we were unable to recover it. 00:26:32.365 [2024-11-20 09:17:09.244768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.365 [2024-11-20 09:17:09.244799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.365 qpair failed and we were unable to recover it. 00:26:32.365 [2024-11-20 09:17:09.244945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.365 [2024-11-20 09:17:09.244977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.365 qpair failed and we were unable to recover it. 00:26:32.365 [2024-11-20 09:17:09.245090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.365 [2024-11-20 09:17:09.245135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.365 qpair failed and we were unable to recover it. 00:26:32.365 [2024-11-20 09:17:09.245271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.365 [2024-11-20 09:17:09.245297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.365 qpair failed and we were unable to recover it. 
00:26:32.365 [2024-11-20 09:17:09.245420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.365 [2024-11-20 09:17:09.245446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.365 qpair failed and we were unable to recover it. 00:26:32.365 [2024-11-20 09:17:09.245532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.365 [2024-11-20 09:17:09.245559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.365 qpair failed and we were unable to recover it. 00:26:32.365 [2024-11-20 09:17:09.245664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.365 [2024-11-20 09:17:09.245690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.365 qpair failed and we were unable to recover it. 00:26:32.365 [2024-11-20 09:17:09.245789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.365 [2024-11-20 09:17:09.245819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.366 qpair failed and we were unable to recover it. 00:26:32.366 [2024-11-20 09:17:09.245931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.366 [2024-11-20 09:17:09.245960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.366 qpair failed and we were unable to recover it. 
00:26:32.366 [2024-11-20 09:17:09.246074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.366 [2024-11-20 09:17:09.246100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.366 qpair failed and we were unable to recover it. 00:26:32.366 [2024-11-20 09:17:09.246210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.366 [2024-11-20 09:17:09.246237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.366 qpair failed and we were unable to recover it. 00:26:32.366 [2024-11-20 09:17:09.246329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.366 [2024-11-20 09:17:09.246356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.366 qpair failed and we were unable to recover it. 00:26:32.366 [2024-11-20 09:17:09.246440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.366 [2024-11-20 09:17:09.246471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.366 qpair failed and we were unable to recover it. 00:26:32.366 [2024-11-20 09:17:09.246569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.366 [2024-11-20 09:17:09.246598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.366 qpair failed and we were unable to recover it. 
00:26:32.366 [2024-11-20 09:17:09.246716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.366 [2024-11-20 09:17:09.246745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.366 qpair failed and we were unable to recover it. 00:26:32.366 [2024-11-20 09:17:09.246834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.366 [2024-11-20 09:17:09.246864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.367 qpair failed and we were unable to recover it. 00:26:32.367 [2024-11-20 09:17:09.246974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.367 [2024-11-20 09:17:09.247018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.367 qpair failed and we were unable to recover it. 00:26:32.367 [2024-11-20 09:17:09.247157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.367 [2024-11-20 09:17:09.247185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.367 qpair failed and we were unable to recover it. 00:26:32.367 [2024-11-20 09:17:09.247275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.367 [2024-11-20 09:17:09.247308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.367 qpair failed and we were unable to recover it. 
00:26:32.367 [2024-11-20 09:17:09.247399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.367 [2024-11-20 09:17:09.247427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.367 qpair failed and we were unable to recover it. 00:26:32.367 [2024-11-20 09:17:09.247524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.367 [2024-11-20 09:17:09.247551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.367 qpair failed and we were unable to recover it. 00:26:32.367 [2024-11-20 09:17:09.247690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.367 [2024-11-20 09:17:09.247716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.367 qpair failed and we were unable to recover it. 00:26:32.367 [2024-11-20 09:17:09.247821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.367 [2024-11-20 09:17:09.247853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.367 qpair failed and we were unable to recover it. 00:26:32.367 [2024-11-20 09:17:09.248015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.367 [2024-11-20 09:17:09.248044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.367 qpair failed and we were unable to recover it. 
00:26:32.367 [2024-11-20 09:17:09.248168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.367 [2024-11-20 09:17:09.248198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.367 qpair failed and we were unable to recover it. 00:26:32.367 [2024-11-20 09:17:09.248318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.367 [2024-11-20 09:17:09.248345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.367 qpair failed and we were unable to recover it. 00:26:32.367 [2024-11-20 09:17:09.248438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.368 [2024-11-20 09:17:09.248464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.368 qpair failed and we were unable to recover it. 00:26:32.368 [2024-11-20 09:17:09.248542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.368 [2024-11-20 09:17:09.248568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.368 qpair failed and we were unable to recover it. 00:26:32.368 [2024-11-20 09:17:09.248681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.368 [2024-11-20 09:17:09.248708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.368 qpair failed and we were unable to recover it. 
00:26:32.368 [2024-11-20 09:17:09.248806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.368 [2024-11-20 09:17:09.248836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.368 qpair failed and we were unable to recover it. 00:26:32.368 [2024-11-20 09:17:09.248946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.368 [2024-11-20 09:17:09.248990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.368 qpair failed and we were unable to recover it. 00:26:32.368 [2024-11-20 09:17:09.249102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.368 [2024-11-20 09:17:09.249130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.368 qpair failed and we were unable to recover it. 00:26:32.368 [2024-11-20 09:17:09.249229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.368 [2024-11-20 09:17:09.249267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.368 qpair failed and we were unable to recover it. 00:26:32.368 [2024-11-20 09:17:09.249366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.368 [2024-11-20 09:17:09.249394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.368 qpair failed and we were unable to recover it. 
00:26:32.369 [2024-11-20 09:17:09.249501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.369 [2024-11-20 09:17:09.249528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.369 qpair failed and we were unable to recover it. 00:26:32.369 [2024-11-20 09:17:09.249651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.369 [2024-11-20 09:17:09.249680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.369 qpair failed and we were unable to recover it. 00:26:32.369 [2024-11-20 09:17:09.249858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.369 [2024-11-20 09:17:09.249886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.369 qpair failed and we were unable to recover it. 00:26:32.369 [2024-11-20 09:17:09.250025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.369 [2024-11-20 09:17:09.250069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.369 qpair failed and we were unable to recover it. 00:26:32.369 [2024-11-20 09:17:09.250153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.369 [2024-11-20 09:17:09.250180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.369 qpair failed and we were unable to recover it. 
00:26:32.369 [2024-11-20 09:17:09.250272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.369 [2024-11-20 09:17:09.250300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.369 qpair failed and we were unable to recover it. 00:26:32.369 [2024-11-20 09:17:09.250406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.369 [2024-11-20 09:17:09.250436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.369 qpair failed and we were unable to recover it. 00:26:32.369 [2024-11-20 09:17:09.250553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.369 [2024-11-20 09:17:09.250598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.369 qpair failed and we were unable to recover it. 00:26:32.369 [2024-11-20 09:17:09.250751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.369 [2024-11-20 09:17:09.250781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.369 qpair failed and we were unable to recover it. 00:26:32.369 [2024-11-20 09:17:09.250862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.369 [2024-11-20 09:17:09.250891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.369 qpair failed and we were unable to recover it. 
00:26:32.369 [2024-11-20 09:17:09.250981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.369 [2024-11-20 09:17:09.251011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.369 qpair failed and we were unable to recover it. 00:26:32.369 [2024-11-20 09:17:09.251134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.369 [2024-11-20 09:17:09.251174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.369 qpair failed and we were unable to recover it. 00:26:32.369 [2024-11-20 09:17:09.251335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.369 [2024-11-20 09:17:09.251364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.369 qpair failed and we were unable to recover it. 00:26:32.369 [2024-11-20 09:17:09.251487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.369 [2024-11-20 09:17:09.251515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.369 qpair failed and we were unable to recover it. 00:26:32.370 [2024-11-20 09:17:09.251652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.370 [2024-11-20 09:17:09.251695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.370 qpair failed and we were unable to recover it. 
00:26:32.370 [2024-11-20 09:17:09.251824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.370 [2024-11-20 09:17:09.251852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.370 qpair failed and we were unable to recover it. 00:26:32.370 [2024-11-20 09:17:09.251988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.370 [2024-11-20 09:17:09.252034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.370 qpair failed and we were unable to recover it. 00:26:32.370 [2024-11-20 09:17:09.252126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.370 [2024-11-20 09:17:09.252153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.370 qpair failed and we were unable to recover it. 00:26:32.370 [2024-11-20 09:17:09.252259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.370 [2024-11-20 09:17:09.252317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.370 qpair failed and we were unable to recover it. 00:26:32.370 [2024-11-20 09:17:09.252419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.370 [2024-11-20 09:17:09.252447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.370 qpair failed and we were unable to recover it. 
00:26:32.370 [2024-11-20 09:17:09.252546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.370 [2024-11-20 09:17:09.252572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.370 qpair failed and we were unable to recover it. 00:26:32.371 [2024-11-20 09:17:09.252662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.371 [2024-11-20 09:17:09.252689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.371 qpair failed and we were unable to recover it. 00:26:32.371 [2024-11-20 09:17:09.252800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.371 [2024-11-20 09:17:09.252826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.371 qpair failed and we were unable to recover it. 00:26:32.371 [2024-11-20 09:17:09.252925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.371 [2024-11-20 09:17:09.252951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.371 qpair failed and we were unable to recover it. 00:26:32.371 [2024-11-20 09:17:09.253029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.371 [2024-11-20 09:17:09.253055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.371 qpair failed and we were unable to recover it. 
00:26:32.371 [2024-11-20 09:17:09.253139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.371 [2024-11-20 09:17:09.253166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.371 qpair failed and we were unable to recover it. 00:26:32.371 [2024-11-20 09:17:09.253272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.371 [2024-11-20 09:17:09.253297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.371 qpair failed and we were unable to recover it. 00:26:32.371 [2024-11-20 09:17:09.253415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.371 [2024-11-20 09:17:09.253441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.371 qpair failed and we were unable to recover it. 00:26:32.371 [2024-11-20 09:17:09.253528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.371 [2024-11-20 09:17:09.253554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.371 qpair failed and we were unable to recover it. 00:26:32.371 [2024-11-20 09:17:09.253664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.371 [2024-11-20 09:17:09.253690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.371 qpair failed and we were unable to recover it. 
00:26:32.371 [2024-11-20 09:17:09.253817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.371 [2024-11-20 09:17:09.253843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.371 qpair failed and we were unable to recover it. 00:26:32.372 [2024-11-20 09:17:09.253927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.372 [2024-11-20 09:17:09.253953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.372 qpair failed and we were unable to recover it. 00:26:32.372 [2024-11-20 09:17:09.254095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.372 [2024-11-20 09:17:09.254121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.372 qpair failed and we were unable to recover it. 00:26:32.372 [2024-11-20 09:17:09.254260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.372 [2024-11-20 09:17:09.254299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.372 qpair failed and we were unable to recover it. 00:26:32.372 [2024-11-20 09:17:09.254447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.372 [2024-11-20 09:17:09.254487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.372 qpair failed and we were unable to recover it. 
00:26:32.372 [2024-11-20 09:17:09.254597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.372 [2024-11-20 09:17:09.254636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.372 qpair failed and we were unable to recover it. 00:26:32.372 [2024-11-20 09:17:09.254722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.372 [2024-11-20 09:17:09.254751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.372 qpair failed and we were unable to recover it. 00:26:32.372 [2024-11-20 09:17:09.254870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.372 [2024-11-20 09:17:09.254899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.372 qpair failed and we were unable to recover it. 00:26:32.372 [2024-11-20 09:17:09.255033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.372 [2024-11-20 09:17:09.255058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.372 qpair failed and we were unable to recover it. 00:26:32.372 [2024-11-20 09:17:09.255175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.372 [2024-11-20 09:17:09.255201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.372 qpair failed and we were unable to recover it. 
00:26:32.372 [2024-11-20 09:17:09.255337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.372 [2024-11-20 09:17:09.255363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.372 qpair failed and we were unable to recover it. 00:26:32.372 [2024-11-20 09:17:09.255447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.372 [2024-11-20 09:17:09.255472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.372 qpair failed and we were unable to recover it. 00:26:32.373 [2024-11-20 09:17:09.255562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.373 [2024-11-20 09:17:09.255589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.373 qpair failed and we were unable to recover it. 00:26:32.373 [2024-11-20 09:17:09.255705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.373 [2024-11-20 09:17:09.255731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.373 qpair failed and we were unable to recover it. 00:26:32.373 [2024-11-20 09:17:09.255848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.373 [2024-11-20 09:17:09.255873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.373 qpair failed and we were unable to recover it. 
00:26:32.373 [2024-11-20 09:17:09.255957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.373 [2024-11-20 09:17:09.255988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.373 qpair failed and we were unable to recover it. 00:26:32.373 [2024-11-20 09:17:09.256093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.373 [2024-11-20 09:17:09.256133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.373 qpair failed and we were unable to recover it. 00:26:32.373 [2024-11-20 09:17:09.256254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.373 [2024-11-20 09:17:09.256282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.373 qpair failed and we were unable to recover it. 00:26:32.373 [2024-11-20 09:17:09.256386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.373 [2024-11-20 09:17:09.256426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.373 qpair failed and we were unable to recover it. 00:26:32.373 [2024-11-20 09:17:09.256518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.373 [2024-11-20 09:17:09.256545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.373 qpair failed and we were unable to recover it. 
00:26:32.373 [2024-11-20 09:17:09.256661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.373 [2024-11-20 09:17:09.256687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.373 qpair failed and we were unable to recover it. 00:26:32.373 [2024-11-20 09:17:09.256794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.373 [2024-11-20 09:17:09.256820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.373 qpair failed and we were unable to recover it. 00:26:32.373 [2024-11-20 09:17:09.256898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.373 [2024-11-20 09:17:09.256924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.373 qpair failed and we were unable to recover it. 00:26:32.373 [2024-11-20 09:17:09.257014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.373 [2024-11-20 09:17:09.257042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.374 qpair failed and we were unable to recover it. 00:26:32.374 [2024-11-20 09:17:09.257137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.374 [2024-11-20 09:17:09.257177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.374 qpair failed and we were unable to recover it. 
00:26:32.374 [2024-11-20 09:17:09.257269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.374 [2024-11-20 09:17:09.257296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.374 qpair failed and we were unable to recover it.
[log condensed: the connect()/qpair-failure message pair above repeats continuously from 00:26:32.374 through 00:26:32.382 (wall clock 09:17:09.257 to 09:17:09.273), always with errno = 111 and addr=10.0.0.2, port=4420, cycling over tqpairs 0x7fe96c000b90, 0x18fffa0, 0x7fe968000b90, and 0x7fe974000b90; every attempt ends with "qpair failed and we were unable to recover it."]
00:26:32.382 [2024-11-20 09:17:09.273219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.382 [2024-11-20 09:17:09.273247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.382 qpair failed and we were unable to recover it. 00:26:32.382 [2024-11-20 09:17:09.273344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.382 [2024-11-20 09:17:09.273372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.382 qpair failed and we were unable to recover it. 00:26:32.382 [2024-11-20 09:17:09.273490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.382 [2024-11-20 09:17:09.273517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.382 qpair failed and we were unable to recover it. 00:26:32.382 [2024-11-20 09:17:09.273628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.382 [2024-11-20 09:17:09.273658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.382 qpair failed and we were unable to recover it. 00:26:32.382 [2024-11-20 09:17:09.273766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.382 [2024-11-20 09:17:09.273811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.382 qpair failed and we were unable to recover it. 
00:26:32.382 [2024-11-20 09:17:09.273965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.382 [2024-11-20 09:17:09.274011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.382 qpair failed and we were unable to recover it. 00:26:32.382 [2024-11-20 09:17:09.274165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.382 [2024-11-20 09:17:09.274193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.382 qpair failed and we were unable to recover it. 00:26:32.382 [2024-11-20 09:17:09.274291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.274329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 00:26:32.383 [2024-11-20 09:17:09.274414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.274440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 00:26:32.383 [2024-11-20 09:17:09.274517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.274548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 
00:26:32.383 [2024-11-20 09:17:09.274673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.274699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 00:26:32.383 [2024-11-20 09:17:09.274835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.274865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 00:26:32.383 [2024-11-20 09:17:09.274975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.275019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 00:26:32.383 [2024-11-20 09:17:09.275150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.275177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 00:26:32.383 [2024-11-20 09:17:09.275291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.275325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 
00:26:32.383 [2024-11-20 09:17:09.275441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.275466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 00:26:32.383 [2024-11-20 09:17:09.275596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.275635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 00:26:32.383 [2024-11-20 09:17:09.275721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.275748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 00:26:32.383 [2024-11-20 09:17:09.275858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.275892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 00:26:32.383 [2024-11-20 09:17:09.275997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.276045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 
00:26:32.383 [2024-11-20 09:17:09.276154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.276200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 00:26:32.383 [2024-11-20 09:17:09.276340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.276367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 00:26:32.383 [2024-11-20 09:17:09.276453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.276480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 00:26:32.383 [2024-11-20 09:17:09.276569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.276596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 00:26:32.383 [2024-11-20 09:17:09.276708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.276737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 
00:26:32.383 [2024-11-20 09:17:09.276829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.276858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 00:26:32.383 [2024-11-20 09:17:09.276970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.277002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 00:26:32.383 [2024-11-20 09:17:09.277152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.277183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 00:26:32.383 [2024-11-20 09:17:09.277289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.277325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 00:26:32.383 [2024-11-20 09:17:09.277414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.277440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 
00:26:32.383 [2024-11-20 09:17:09.277524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.277549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 00:26:32.383 [2024-11-20 09:17:09.277654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.277682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 00:26:32.383 [2024-11-20 09:17:09.277785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.277811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 00:26:32.383 [2024-11-20 09:17:09.277898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.277924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 00:26:32.383 [2024-11-20 09:17:09.278002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.278030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 
00:26:32.383 [2024-11-20 09:17:09.278106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.278132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 00:26:32.383 [2024-11-20 09:17:09.278213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.278242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 00:26:32.383 [2024-11-20 09:17:09.278347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.278375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 00:26:32.383 [2024-11-20 09:17:09.278488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.278515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 00:26:32.383 [2024-11-20 09:17:09.278598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.278628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 
00:26:32.383 [2024-11-20 09:17:09.278741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.278768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 00:26:32.383 [2024-11-20 09:17:09.278900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.278947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 00:26:32.383 [2024-11-20 09:17:09.279064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.279091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 00:26:32.383 [2024-11-20 09:17:09.279204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.279230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 00:26:32.383 [2024-11-20 09:17:09.279323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.279350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 
00:26:32.383 [2024-11-20 09:17:09.279455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.279485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 00:26:32.383 [2024-11-20 09:17:09.279637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.279667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 00:26:32.383 [2024-11-20 09:17:09.279782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.383 [2024-11-20 09:17:09.279813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.383 qpair failed and we were unable to recover it. 00:26:32.683 [2024-11-20 09:17:09.279912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.683 [2024-11-20 09:17:09.279939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.683 qpair failed and we were unable to recover it. 00:26:32.683 [2024-11-20 09:17:09.280026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.683 [2024-11-20 09:17:09.280054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.683 qpair failed and we were unable to recover it. 
00:26:32.683 [2024-11-20 09:17:09.280150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.683 [2024-11-20 09:17:09.280176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.683 qpair failed and we were unable to recover it. 00:26:32.683 [2024-11-20 09:17:09.280259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.683 [2024-11-20 09:17:09.280286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.683 qpair failed and we were unable to recover it. 00:26:32.683 [2024-11-20 09:17:09.280373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.683 [2024-11-20 09:17:09.280399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.683 qpair failed and we were unable to recover it. 00:26:32.683 [2024-11-20 09:17:09.280475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.683 [2024-11-20 09:17:09.280500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.683 qpair failed and we were unable to recover it. 00:26:32.683 [2024-11-20 09:17:09.280596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.683 [2024-11-20 09:17:09.280622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.683 qpair failed and we were unable to recover it. 
00:26:32.683 [2024-11-20 09:17:09.280733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.683 [2024-11-20 09:17:09.280779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.683 qpair failed and we were unable to recover it. 00:26:32.683 [2024-11-20 09:17:09.280868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.683 [2024-11-20 09:17:09.280894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.683 qpair failed and we were unable to recover it. 00:26:32.683 [2024-11-20 09:17:09.280983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.683 [2024-11-20 09:17:09.281011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.683 qpair failed and we were unable to recover it. 00:26:32.683 [2024-11-20 09:17:09.281124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.683 [2024-11-20 09:17:09.281152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.683 qpair failed and we were unable to recover it. 00:26:32.683 [2024-11-20 09:17:09.281270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.683 [2024-11-20 09:17:09.281300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.683 qpair failed and we were unable to recover it. 
00:26:32.683 [2024-11-20 09:17:09.281396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.683 [2024-11-20 09:17:09.281423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.683 qpair failed and we were unable to recover it. 00:26:32.683 [2024-11-20 09:17:09.281512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.683 [2024-11-20 09:17:09.281538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.683 qpair failed and we were unable to recover it. 00:26:32.683 [2024-11-20 09:17:09.281621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.683 [2024-11-20 09:17:09.281647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.683 qpair failed and we were unable to recover it. 00:26:32.683 [2024-11-20 09:17:09.281748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.683 [2024-11-20 09:17:09.281774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.683 qpair failed and we were unable to recover it. 00:26:32.684 [2024-11-20 09:17:09.281879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.281909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 
00:26:32.684 [2024-11-20 09:17:09.282028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.282057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 00:26:32.684 [2024-11-20 09:17:09.282158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.282184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 00:26:32.684 [2024-11-20 09:17:09.282273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.282298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 00:26:32.684 [2024-11-20 09:17:09.282393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.282419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 00:26:32.684 [2024-11-20 09:17:09.282534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.282560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 
00:26:32.684 [2024-11-20 09:17:09.282638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.282664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 00:26:32.684 [2024-11-20 09:17:09.282747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.282773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 00:26:32.684 [2024-11-20 09:17:09.282864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.282891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 00:26:32.684 [2024-11-20 09:17:09.283007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.283035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 00:26:32.684 [2024-11-20 09:17:09.283149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.283174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 
00:26:32.684 [2024-11-20 09:17:09.283291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.283327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 00:26:32.684 [2024-11-20 09:17:09.283412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.283443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 00:26:32.684 [2024-11-20 09:17:09.283555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.283581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 00:26:32.684 [2024-11-20 09:17:09.283672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.283697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 00:26:32.684 [2024-11-20 09:17:09.283783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.283809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 
00:26:32.684 [2024-11-20 09:17:09.283941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.283966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 00:26:32.684 [2024-11-20 09:17:09.284056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.284085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 00:26:32.684 [2024-11-20 09:17:09.284169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.284196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 00:26:32.684 [2024-11-20 09:17:09.284272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.284298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 00:26:32.684 [2024-11-20 09:17:09.284420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.284446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 
00:26:32.684 [2024-11-20 09:17:09.284531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.284562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 00:26:32.684 [2024-11-20 09:17:09.284647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.284674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 00:26:32.684 [2024-11-20 09:17:09.284769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.284796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 00:26:32.684 [2024-11-20 09:17:09.284916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.284944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 00:26:32.684 [2024-11-20 09:17:09.285064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.285091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 
00:26:32.684 [2024-11-20 09:17:09.285185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.285213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 00:26:32.684 [2024-11-20 09:17:09.285300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.285340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 00:26:32.684 [2024-11-20 09:17:09.285425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.285452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 00:26:32.684 [2024-11-20 09:17:09.285569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.285625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 00:26:32.684 [2024-11-20 09:17:09.285804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.285853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 
00:26:32.684 [2024-11-20 09:17:09.285981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.286011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 00:26:32.684 [2024-11-20 09:17:09.286104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.286133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 00:26:32.684 [2024-11-20 09:17:09.286255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.286294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 00:26:32.684 [2024-11-20 09:17:09.286397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.286425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 00:26:32.684 [2024-11-20 09:17:09.286506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.286532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 
00:26:32.684 [2024-11-20 09:17:09.286622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.286650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 00:26:32.684 [2024-11-20 09:17:09.286736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.684 [2024-11-20 09:17:09.286762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.684 qpair failed and we were unable to recover it. 00:26:32.685 [2024-11-20 09:17:09.286842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.286871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 00:26:32.685 [2024-11-20 09:17:09.286982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.287016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 00:26:32.685 [2024-11-20 09:17:09.287101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.287128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 
00:26:32.685 [2024-11-20 09:17:09.287212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.287241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 00:26:32.685 [2024-11-20 09:17:09.287395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.287423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 00:26:32.685 [2024-11-20 09:17:09.287538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.287564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 00:26:32.685 [2024-11-20 09:17:09.287704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.287729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 00:26:32.685 [2024-11-20 09:17:09.287812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.287838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 
00:26:32.685 [2024-11-20 09:17:09.287921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.287947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 00:26:32.685 [2024-11-20 09:17:09.288038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.288065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 00:26:32.685 [2024-11-20 09:17:09.288177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.288204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 00:26:32.685 [2024-11-20 09:17:09.288301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.288335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 00:26:32.685 [2024-11-20 09:17:09.288451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.288476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 
00:26:32.685 [2024-11-20 09:17:09.288565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.288590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 00:26:32.685 [2024-11-20 09:17:09.288704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.288730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 00:26:32.685 [2024-11-20 09:17:09.288825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.288851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 00:26:32.685 [2024-11-20 09:17:09.288932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.288957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 00:26:32.685 [2024-11-20 09:17:09.289093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.289118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 
00:26:32.685 [2024-11-20 09:17:09.289201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.289227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 00:26:32.685 [2024-11-20 09:17:09.289346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.289372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 00:26:32.685 [2024-11-20 09:17:09.289450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.289476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 00:26:32.685 [2024-11-20 09:17:09.289574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.289601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 00:26:32.685 [2024-11-20 09:17:09.289689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.289715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 
00:26:32.685 [2024-11-20 09:17:09.289795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.289820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 00:26:32.685 [2024-11-20 09:17:09.289913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.289938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 00:26:32.685 [2024-11-20 09:17:09.290052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.290081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 00:26:32.685 [2024-11-20 09:17:09.290230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.290257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 00:26:32.685 [2024-11-20 09:17:09.290352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.290380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 
00:26:32.685 [2024-11-20 09:17:09.290491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.290536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 00:26:32.685 [2024-11-20 09:17:09.290653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.290678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 00:26:32.685 [2024-11-20 09:17:09.290799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.290824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 00:26:32.685 [2024-11-20 09:17:09.290935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.290961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 00:26:32.685 [2024-11-20 09:17:09.291053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.291079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 
00:26:32.685 [2024-11-20 09:17:09.291229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.291268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 00:26:32.685 [2024-11-20 09:17:09.291363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.291390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 00:26:32.685 [2024-11-20 09:17:09.291500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.291526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 00:26:32.685 [2024-11-20 09:17:09.291608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.291633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.685 qpair failed and we were unable to recover it. 00:26:32.685 [2024-11-20 09:17:09.291747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.685 [2024-11-20 09:17:09.291773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 
00:26:32.686 [2024-11-20 09:17:09.291859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.291886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 00:26:32.686 [2024-11-20 09:17:09.291969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.291995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 00:26:32.686 [2024-11-20 09:17:09.292145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.292171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 00:26:32.686 [2024-11-20 09:17:09.292258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.292285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 00:26:32.686 [2024-11-20 09:17:09.292385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.292413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 
00:26:32.686 [2024-11-20 09:17:09.292544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.292583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 00:26:32.686 [2024-11-20 09:17:09.292678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.292708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 00:26:32.686 [2024-11-20 09:17:09.292799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.292827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 00:26:32.686 [2024-11-20 09:17:09.292907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.292933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 00:26:32.686 [2024-11-20 09:17:09.293048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.293073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 
00:26:32.686 [2024-11-20 09:17:09.293160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.293186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 00:26:32.686 [2024-11-20 09:17:09.293275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.293313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 00:26:32.686 [2024-11-20 09:17:09.293423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.293469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 00:26:32.686 [2024-11-20 09:17:09.293599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.293644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 00:26:32.686 [2024-11-20 09:17:09.293773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.293818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 
00:26:32.686 [2024-11-20 09:17:09.293924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.293949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 00:26:32.686 [2024-11-20 09:17:09.294066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.294091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 00:26:32.686 [2024-11-20 09:17:09.294179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.294207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 00:26:32.686 [2024-11-20 09:17:09.294287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.294320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 00:26:32.686 [2024-11-20 09:17:09.294409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.294435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 
00:26:32.686 [2024-11-20 09:17:09.294525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.294552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 00:26:32.686 [2024-11-20 09:17:09.294636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.294662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 00:26:32.686 [2024-11-20 09:17:09.294745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.294773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 00:26:32.686 [2024-11-20 09:17:09.294847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.294872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 00:26:32.686 [2024-11-20 09:17:09.294985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.295011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 
00:26:32.686 [2024-11-20 09:17:09.295100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.295127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 00:26:32.686 [2024-11-20 09:17:09.295243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.295268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 00:26:32.686 [2024-11-20 09:17:09.295408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.295447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 00:26:32.686 [2024-11-20 09:17:09.295550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.295589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 00:26:32.686 [2024-11-20 09:17:09.295690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.295719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 
00:26:32.686 [2024-11-20 09:17:09.295829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.295861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 00:26:32.686 [2024-11-20 09:17:09.295963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.295990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 00:26:32.686 [2024-11-20 09:17:09.296077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.296106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 00:26:32.686 [2024-11-20 09:17:09.296197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.296224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 00:26:32.686 [2024-11-20 09:17:09.296322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.296353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 
00:26:32.686 [2024-11-20 09:17:09.296459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.296498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 00:26:32.686 [2024-11-20 09:17:09.296628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.686 [2024-11-20 09:17:09.296658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.686 qpair failed and we were unable to recover it. 00:26:32.686 [2024-11-20 09:17:09.296802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.687 [2024-11-20 09:17:09.296847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.687 qpair failed and we were unable to recover it. 00:26:32.687 [2024-11-20 09:17:09.296983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.687 [2024-11-20 09:17:09.297028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.687 qpair failed and we were unable to recover it. 00:26:32.687 [2024-11-20 09:17:09.297118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.687 [2024-11-20 09:17:09.297146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.687 qpair failed and we were unable to recover it. 
00:26:32.687 [2024-11-20 09:17:09.297235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.687 [2024-11-20 09:17:09.297262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.687 qpair failed and we were unable to recover it.
00:26:32.687 [2024-11-20 09:17:09.297398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.687 [2024-11-20 09:17:09.297438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.687 qpair failed and we were unable to recover it.
00:26:32.687 [2024-11-20 09:17:09.297540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.687 [2024-11-20 09:17:09.297567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.687 qpair failed and we were unable to recover it.
00:26:32.687 [2024-11-20 09:17:09.297731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.687 [2024-11-20 09:17:09.297757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.687 qpair failed and we were unable to recover it.
00:26:32.687 [2024-11-20 09:17:09.297885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.687 [2024-11-20 09:17:09.297920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.687 qpair failed and we were unable to recover it.
00:26:32.687 [2024-11-20 09:17:09.298033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.687 [2024-11-20 09:17:09.298081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.687 qpair failed and we were unable to recover it.
00:26:32.687 [2024-11-20 09:17:09.298184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.687 [2024-11-20 09:17:09.298210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.687 qpair failed and we were unable to recover it.
00:26:32.687 [2024-11-20 09:17:09.298298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.687 [2024-11-20 09:17:09.298334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.687 qpair failed and we were unable to recover it.
00:26:32.687 [2024-11-20 09:17:09.298427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.687 [2024-11-20 09:17:09.298453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.687 qpair failed and we were unable to recover it.
00:26:32.687 [2024-11-20 09:17:09.298536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.687 [2024-11-20 09:17:09.298562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.687 qpair failed and we were unable to recover it.
00:26:32.687 [2024-11-20 09:17:09.298714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.687 [2024-11-20 09:17:09.298741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.687 qpair failed and we were unable to recover it.
00:26:32.687 [2024-11-20 09:17:09.298855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.687 [2024-11-20 09:17:09.298880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.687 qpair failed and we were unable to recover it.
00:26:32.687 [2024-11-20 09:17:09.298963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.687 [2024-11-20 09:17:09.298988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.687 qpair failed and we were unable to recover it.
00:26:32.687 [2024-11-20 09:17:09.299073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.687 [2024-11-20 09:17:09.299099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.687 qpair failed and we were unable to recover it.
00:26:32.687 [2024-11-20 09:17:09.299209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.687 [2024-11-20 09:17:09.299234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.687 qpair failed and we were unable to recover it.
00:26:32.687 [2024-11-20 09:17:09.299355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.687 [2024-11-20 09:17:09.299382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.687 qpair failed and we were unable to recover it.
00:26:32.687 [2024-11-20 09:17:09.299475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.687 [2024-11-20 09:17:09.299514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.687 qpair failed and we were unable to recover it.
00:26:32.687 [2024-11-20 09:17:09.299607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.687 [2024-11-20 09:17:09.299640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.687 qpair failed and we were unable to recover it.
00:26:32.687 [2024-11-20 09:17:09.299756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.687 [2024-11-20 09:17:09.299782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.687 qpair failed and we were unable to recover it.
00:26:32.687 [2024-11-20 09:17:09.299894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.687 [2024-11-20 09:17:09.299921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.687 qpair failed and we were unable to recover it.
00:26:32.687 [2024-11-20 09:17:09.300000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.687 [2024-11-20 09:17:09.300027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.687 qpair failed and we were unable to recover it.
00:26:32.687 [2024-11-20 09:17:09.300122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.687 [2024-11-20 09:17:09.300161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.687 qpair failed and we were unable to recover it.
00:26:32.687 [2024-11-20 09:17:09.300253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.687 [2024-11-20 09:17:09.300280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.687 qpair failed and we were unable to recover it.
00:26:32.687 [2024-11-20 09:17:09.300380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.687 [2024-11-20 09:17:09.300420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.687 qpair failed and we were unable to recover it.
00:26:32.687 [2024-11-20 09:17:09.300523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.687 [2024-11-20 09:17:09.300552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.687 qpair failed and we were unable to recover it.
00:26:32.687 [2024-11-20 09:17:09.300643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.687 [2024-11-20 09:17:09.300670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.687 qpair failed and we were unable to recover it.
00:26:32.687 [2024-11-20 09:17:09.300772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.687 [2024-11-20 09:17:09.300805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.687 qpair failed and we were unable to recover it.
00:26:32.687 [2024-11-20 09:17:09.300942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.687 [2024-11-20 09:17:09.300973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.687 qpair failed and we were unable to recover it.
00:26:32.687 [2024-11-20 09:17:09.301065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.687 [2024-11-20 09:17:09.301097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.687 qpair failed and we were unable to recover it.
00:26:32.687 [2024-11-20 09:17:09.301214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.687 [2024-11-20 09:17:09.301240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.687 qpair failed and we were unable to recover it.
00:26:32.687 [2024-11-20 09:17:09.301336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.687 [2024-11-20 09:17:09.301364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.687 qpair failed and we were unable to recover it.
00:26:32.687 [2024-11-20 09:17:09.301462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.687 [2024-11-20 09:17:09.301488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.687 qpair failed and we were unable to recover it.
00:26:32.687 [2024-11-20 09:17:09.301592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.687 [2024-11-20 09:17:09.301623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.687 qpair failed and we were unable to recover it.
00:26:32.687 [2024-11-20 09:17:09.301723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.687 [2024-11-20 09:17:09.301751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.687 qpair failed and we were unable to recover it.
00:26:32.687 [2024-11-20 09:17:09.301879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.687 [2024-11-20 09:17:09.301923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.688 qpair failed and we were unable to recover it.
00:26:32.688 [2024-11-20 09:17:09.302073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.688 [2024-11-20 09:17:09.302099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.688 qpair failed and we were unable to recover it.
00:26:32.688 [2024-11-20 09:17:09.302203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.688 [2024-11-20 09:17:09.302242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.688 qpair failed and we were unable to recover it.
00:26:32.688 [2024-11-20 09:17:09.302367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.688 [2024-11-20 09:17:09.302407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.688 qpair failed and we were unable to recover it.
00:26:32.688 [2024-11-20 09:17:09.302532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.688 [2024-11-20 09:17:09.302560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.688 qpair failed and we were unable to recover it.
00:26:32.688 [2024-11-20 09:17:09.302669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.688 [2024-11-20 09:17:09.302701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.688 qpair failed and we were unable to recover it.
00:26:32.688 [2024-11-20 09:17:09.302818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.688 [2024-11-20 09:17:09.302845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.688 qpair failed and we were unable to recover it.
00:26:32.688 [2024-11-20 09:17:09.302972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.688 [2024-11-20 09:17:09.303007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.688 qpair failed and we were unable to recover it.
00:26:32.688 [2024-11-20 09:17:09.303114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.688 [2024-11-20 09:17:09.303146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.688 qpair failed and we were unable to recover it.
00:26:32.688 [2024-11-20 09:17:09.303265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.688 [2024-11-20 09:17:09.303292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.688 qpair failed and we were unable to recover it.
00:26:32.688 [2024-11-20 09:17:09.303390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.688 [2024-11-20 09:17:09.303419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.688 qpair failed and we were unable to recover it.
00:26:32.688 [2024-11-20 09:17:09.303511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.688 [2024-11-20 09:17:09.303538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.688 qpair failed and we were unable to recover it.
00:26:32.688 [2024-11-20 09:17:09.303629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.688 [2024-11-20 09:17:09.303655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.688 qpair failed and we were unable to recover it.
00:26:32.688 [2024-11-20 09:17:09.303737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.688 [2024-11-20 09:17:09.303763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.688 qpair failed and we were unable to recover it.
00:26:32.688 [2024-11-20 09:17:09.303904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.688 [2024-11-20 09:17:09.303948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.688 qpair failed and we were unable to recover it.
00:26:32.688 [2024-11-20 09:17:09.304064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.688 [2024-11-20 09:17:09.304090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.688 qpair failed and we were unable to recover it.
00:26:32.688 [2024-11-20 09:17:09.304204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.688 [2024-11-20 09:17:09.304232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.688 qpair failed and we were unable to recover it.
00:26:32.688 [2024-11-20 09:17:09.304321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.688 [2024-11-20 09:17:09.304350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.688 qpair failed and we were unable to recover it.
00:26:32.688 [2024-11-20 09:17:09.304446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.688 [2024-11-20 09:17:09.304485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.688 qpair failed and we were unable to recover it.
00:26:32.688 [2024-11-20 09:17:09.304582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.688 [2024-11-20 09:17:09.304609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.688 qpair failed and we were unable to recover it.
00:26:32.688 [2024-11-20 09:17:09.304783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.688 [2024-11-20 09:17:09.304831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.688 qpair failed and we were unable to recover it.
00:26:32.688 [2024-11-20 09:17:09.304965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.688 [2024-11-20 09:17:09.305011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.688 qpair failed and we were unable to recover it.
00:26:32.688 [2024-11-20 09:17:09.305089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.688 [2024-11-20 09:17:09.305115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.688 qpair failed and we were unable to recover it.
00:26:32.688 [2024-11-20 09:17:09.305246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.688 [2024-11-20 09:17:09.305295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.688 qpair failed and we were unable to recover it.
00:26:32.688 [2024-11-20 09:17:09.305429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.688 [2024-11-20 09:17:09.305458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.688 qpair failed and we were unable to recover it.
00:26:32.688 [2024-11-20 09:17:09.305567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.688 [2024-11-20 09:17:09.305614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.688 qpair failed and we were unable to recover it.
00:26:32.688 [2024-11-20 09:17:09.305804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.688 [2024-11-20 09:17:09.305836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.688 qpair failed and we were unable to recover it.
00:26:32.688 [2024-11-20 09:17:09.305972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.688 [2024-11-20 09:17:09.306004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.688 qpair failed and we were unable to recover it.
00:26:32.688 [2024-11-20 09:17:09.306138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.688 [2024-11-20 09:17:09.306170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.688 qpair failed and we were unable to recover it.
00:26:32.688 [2024-11-20 09:17:09.306295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.688 [2024-11-20 09:17:09.306341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.688 qpair failed and we were unable to recover it.
00:26:32.688 [2024-11-20 09:17:09.306469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.688 [2024-11-20 09:17:09.306498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.688 qpair failed and we were unable to recover it.
00:26:32.688 [2024-11-20 09:17:09.306621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.688 [2024-11-20 09:17:09.306648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.688 qpair failed and we were unable to recover it.
00:26:32.688 [2024-11-20 09:17:09.306766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.688 [2024-11-20 09:17:09.306792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.689 qpair failed and we were unable to recover it.
00:26:32.689 [2024-11-20 09:17:09.306876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.689 [2024-11-20 09:17:09.306901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.689 qpair failed and we were unable to recover it.
00:26:32.689 [2024-11-20 09:17:09.307004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.689 [2024-11-20 09:17:09.307035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.689 qpair failed and we were unable to recover it.
00:26:32.689 [2024-11-20 09:17:09.307146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.689 [2024-11-20 09:17:09.307174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.689 qpair failed and we were unable to recover it.
00:26:32.689 [2024-11-20 09:17:09.307255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.689 [2024-11-20 09:17:09.307281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.689 qpair failed and we were unable to recover it.
00:26:32.689 [2024-11-20 09:17:09.307387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.689 [2024-11-20 09:17:09.307413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.689 qpair failed and we were unable to recover it.
00:26:32.689 [2024-11-20 09:17:09.307504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.689 [2024-11-20 09:17:09.307530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.689 qpair failed and we were unable to recover it.
00:26:32.689 [2024-11-20 09:17:09.307644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.689 [2024-11-20 09:17:09.307670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.689 qpair failed and we were unable to recover it.
00:26:32.689 [2024-11-20 09:17:09.307832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.689 [2024-11-20 09:17:09.307875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.689 qpair failed and we were unable to recover it.
00:26:32.689 [2024-11-20 09:17:09.308010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.689 [2024-11-20 09:17:09.308057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.689 qpair failed and we were unable to recover it.
00:26:32.689 [2024-11-20 09:17:09.308142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.689 [2024-11-20 09:17:09.308169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.689 qpair failed and we were unable to recover it.
00:26:32.689 [2024-11-20 09:17:09.308254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.689 [2024-11-20 09:17:09.308281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.689 qpair failed and we were unable to recover it.
00:26:32.689 [2024-11-20 09:17:09.308398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.689 [2024-11-20 09:17:09.308437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.689 qpair failed and we were unable to recover it.
00:26:32.689 [2024-11-20 09:17:09.308545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.689 [2024-11-20 09:17:09.308584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.689 qpair failed and we were unable to recover it.
00:26:32.689 [2024-11-20 09:17:09.308698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.689 [2024-11-20 09:17:09.308726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.689 qpair failed and we were unable to recover it.
00:26:32.689 [2024-11-20 09:17:09.308837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.689 [2024-11-20 09:17:09.308864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.689 qpair failed and we were unable to recover it.
00:26:32.689 [2024-11-20 09:17:09.308960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.689 [2024-11-20 09:17:09.308986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.689 qpair failed and we were unable to recover it.
00:26:32.689 [2024-11-20 09:17:09.309073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.689 [2024-11-20 09:17:09.309100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.689 qpair failed and we were unable to recover it.
00:26:32.689 [2024-11-20 09:17:09.309207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.689 [2024-11-20 09:17:09.309251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.689 qpair failed and we were unable to recover it.
00:26:32.689 [2024-11-20 09:17:09.309383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.689 [2024-11-20 09:17:09.309414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.689 qpair failed and we were unable to recover it.
00:26:32.689 [2024-11-20 09:17:09.309503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.689 [2024-11-20 09:17:09.309531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.689 qpair failed and we were unable to recover it.
00:26:32.689 [2024-11-20 09:17:09.309636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.689 [2024-11-20 09:17:09.309666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.689 qpair failed and we were unable to recover it.
00:26:32.689 [2024-11-20 09:17:09.309783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.689 [2024-11-20 09:17:09.309827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.689 qpair failed and we were unable to recover it.
00:26:32.689 [2024-11-20 09:17:09.309978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.689 [2024-11-20 09:17:09.310011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.689 qpair failed and we were unable to recover it.
00:26:32.689 [2024-11-20 09:17:09.310121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.689 [2024-11-20 09:17:09.310148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.689 qpair failed and we were unable to recover it.
00:26:32.689 [2024-11-20 09:17:09.310245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.689 [2024-11-20 09:17:09.310271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.689 qpair failed and we were unable to recover it.
00:26:32.689 [2024-11-20 09:17:09.310388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.689 [2024-11-20 09:17:09.310415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.689 qpair failed and we were unable to recover it.
00:26:32.689 [2024-11-20 09:17:09.310527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.689 [2024-11-20 09:17:09.310552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.689 qpair failed and we were unable to recover it.
00:26:32.689 [2024-11-20 09:17:09.310637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.689 [2024-11-20 09:17:09.310663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.689 qpair failed and we were unable to recover it.
00:26:32.689 [2024-11-20 09:17:09.310770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.689 [2024-11-20 09:17:09.310796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.689 qpair failed and we were unable to recover it. 00:26:32.689 [2024-11-20 09:17:09.310912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.689 [2024-11-20 09:17:09.310939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.689 qpair failed and we were unable to recover it. 00:26:32.689 [2024-11-20 09:17:09.311052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.689 [2024-11-20 09:17:09.311079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.689 qpair failed and we were unable to recover it. 00:26:32.689 [2024-11-20 09:17:09.311176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.689 [2024-11-20 09:17:09.311204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.689 qpair failed and we were unable to recover it. 00:26:32.689 [2024-11-20 09:17:09.311287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.689 [2024-11-20 09:17:09.311321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.689 qpair failed and we were unable to recover it. 
00:26:32.689 [2024-11-20 09:17:09.311406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.689 [2024-11-20 09:17:09.311432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.689 qpair failed and we were unable to recover it. 00:26:32.689 [2024-11-20 09:17:09.311527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.689 [2024-11-20 09:17:09.311553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.689 qpair failed and we were unable to recover it. 00:26:32.689 [2024-11-20 09:17:09.311683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.689 [2024-11-20 09:17:09.311714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.689 qpair failed and we were unable to recover it. 00:26:32.689 [2024-11-20 09:17:09.311826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.689 [2024-11-20 09:17:09.311874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.689 qpair failed and we were unable to recover it. 00:26:32.689 [2024-11-20 09:17:09.312004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.312038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 
00:26:32.690 [2024-11-20 09:17:09.312175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.312203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 00:26:32.690 [2024-11-20 09:17:09.312290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.312324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 00:26:32.690 [2024-11-20 09:17:09.312440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.312466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 00:26:32.690 [2024-11-20 09:17:09.312553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.312579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 00:26:32.690 [2024-11-20 09:17:09.312668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.312694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 
00:26:32.690 [2024-11-20 09:17:09.312767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.312793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 00:26:32.690 [2024-11-20 09:17:09.312913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.312940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 00:26:32.690 [2024-11-20 09:17:09.313024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.313050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 00:26:32.690 [2024-11-20 09:17:09.313138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.313166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 00:26:32.690 [2024-11-20 09:17:09.313254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.313280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 
00:26:32.690 [2024-11-20 09:17:09.313390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.313429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 00:26:32.690 [2024-11-20 09:17:09.313524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.313551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 00:26:32.690 [2024-11-20 09:17:09.313638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.313666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 00:26:32.690 [2024-11-20 09:17:09.313799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.313844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 00:26:32.690 [2024-11-20 09:17:09.313956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.314004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 
00:26:32.690 [2024-11-20 09:17:09.314082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.314109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 00:26:32.690 [2024-11-20 09:17:09.314199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.314228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 00:26:32.690 [2024-11-20 09:17:09.314316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.314343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 00:26:32.690 [2024-11-20 09:17:09.314424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.314450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 00:26:32.690 [2024-11-20 09:17:09.314567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.314622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 
00:26:32.690 [2024-11-20 09:17:09.314739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.314783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 00:26:32.690 [2024-11-20 09:17:09.314901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.314934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 00:26:32.690 [2024-11-20 09:17:09.315035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.315068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 00:26:32.690 [2024-11-20 09:17:09.315199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.315232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 00:26:32.690 [2024-11-20 09:17:09.315351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.315395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 
00:26:32.690 [2024-11-20 09:17:09.315517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.315543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 00:26:32.690 [2024-11-20 09:17:09.315655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.315681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 00:26:32.690 [2024-11-20 09:17:09.315778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.315804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 00:26:32.690 [2024-11-20 09:17:09.315913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.315962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 00:26:32.690 [2024-11-20 09:17:09.316067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.316096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 
00:26:32.690 [2024-11-20 09:17:09.316221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.316247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 00:26:32.690 [2024-11-20 09:17:09.316359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.316386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 00:26:32.690 [2024-11-20 09:17:09.316495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.316521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 00:26:32.690 [2024-11-20 09:17:09.316620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.316646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 00:26:32.690 [2024-11-20 09:17:09.316737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.316764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 
00:26:32.690 [2024-11-20 09:17:09.316903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.316930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 00:26:32.690 [2024-11-20 09:17:09.317018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.690 [2024-11-20 09:17:09.317045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.690 qpair failed and we were unable to recover it. 00:26:32.690 [2024-11-20 09:17:09.317155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.317181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 00:26:32.691 [2024-11-20 09:17:09.317260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.317286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 00:26:32.691 [2024-11-20 09:17:09.317392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.317418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 
00:26:32.691 [2024-11-20 09:17:09.317514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.317540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 00:26:32.691 [2024-11-20 09:17:09.317703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.317729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 00:26:32.691 [2024-11-20 09:17:09.317809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.317835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 00:26:32.691 [2024-11-20 09:17:09.317995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.318026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 00:26:32.691 [2024-11-20 09:17:09.318137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.318183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 
00:26:32.691 [2024-11-20 09:17:09.318278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.318318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 00:26:32.691 [2024-11-20 09:17:09.318436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.318464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 00:26:32.691 [2024-11-20 09:17:09.318550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.318575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 00:26:32.691 [2024-11-20 09:17:09.318691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.318717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 00:26:32.691 [2024-11-20 09:17:09.318839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.318883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 
00:26:32.691 [2024-11-20 09:17:09.319013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.319060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 00:26:32.691 [2024-11-20 09:17:09.319170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.319196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 00:26:32.691 [2024-11-20 09:17:09.319322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.319349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 00:26:32.691 [2024-11-20 09:17:09.319441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.319466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 00:26:32.691 [2024-11-20 09:17:09.319559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.319584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 
00:26:32.691 [2024-11-20 09:17:09.319676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.319702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 00:26:32.691 [2024-11-20 09:17:09.319807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.319845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 00:26:32.691 [2024-11-20 09:17:09.319964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.319991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 00:26:32.691 [2024-11-20 09:17:09.320085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.320111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 00:26:32.691 [2024-11-20 09:17:09.320200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.320237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 
00:26:32.691 [2024-11-20 09:17:09.320324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.320352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 00:26:32.691 [2024-11-20 09:17:09.320436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.320463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 00:26:32.691 [2024-11-20 09:17:09.320557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.320583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 00:26:32.691 [2024-11-20 09:17:09.320667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.320693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 00:26:32.691 [2024-11-20 09:17:09.320808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.320837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 
00:26:32.691 [2024-11-20 09:17:09.320949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.320977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 00:26:32.691 [2024-11-20 09:17:09.321086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.321113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 00:26:32.691 [2024-11-20 09:17:09.321201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.321227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 00:26:32.691 [2024-11-20 09:17:09.321323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.321350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 00:26:32.691 [2024-11-20 09:17:09.321438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.321464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 
00:26:32.691 [2024-11-20 09:17:09.321546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.321572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 00:26:32.691 [2024-11-20 09:17:09.321712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.321741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 00:26:32.691 [2024-11-20 09:17:09.321867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.321896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 00:26:32.691 [2024-11-20 09:17:09.322005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.322032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 00:26:32.691 [2024-11-20 09:17:09.322133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.691 [2024-11-20 09:17:09.322167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.691 qpair failed and we were unable to recover it. 
00:26:32.692 [2024-11-20 09:17:09.322269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.692 [2024-11-20 09:17:09.322296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.692 qpair failed and we were unable to recover it. 00:26:32.692 [2024-11-20 09:17:09.322418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.692 [2024-11-20 09:17:09.322444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.692 qpair failed and we were unable to recover it. 00:26:32.692 [2024-11-20 09:17:09.322523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.692 [2024-11-20 09:17:09.322549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.692 qpair failed and we were unable to recover it. 00:26:32.692 [2024-11-20 09:17:09.322662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.692 [2024-11-20 09:17:09.322688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.692 qpair failed and we were unable to recover it. 00:26:32.692 [2024-11-20 09:17:09.322770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.692 [2024-11-20 09:17:09.322797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.692 qpair failed and we were unable to recover it. 
00:26:32.692 [2024-11-20 09:17:09.322912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.692 [2024-11-20 09:17:09.322938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.692 qpair failed and we were unable to recover it. 00:26:32.692 [2024-11-20 09:17:09.323021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.692 [2024-11-20 09:17:09.323049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.692 qpair failed and we were unable to recover it. 00:26:32.692 [2024-11-20 09:17:09.323139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.692 [2024-11-20 09:17:09.323166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.692 qpair failed and we were unable to recover it. 00:26:32.692 [2024-11-20 09:17:09.323260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.692 [2024-11-20 09:17:09.323286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.692 qpair failed and we were unable to recover it. 00:26:32.692 [2024-11-20 09:17:09.323380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.692 [2024-11-20 09:17:09.323407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.692 qpair failed and we were unable to recover it. 
00:26:32.692 [2024-11-20 09:17:09.323495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.692 [2024-11-20 09:17:09.323521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.692 qpair failed and we were unable to recover it. 00:26:32.692 [2024-11-20 09:17:09.323609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.692 [2024-11-20 09:17:09.323641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.692 qpair failed and we were unable to recover it. 00:26:32.692 [2024-11-20 09:17:09.323718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.692 [2024-11-20 09:17:09.323745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.692 qpair failed and we were unable to recover it. 00:26:32.692 [2024-11-20 09:17:09.323855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.692 [2024-11-20 09:17:09.323903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.692 qpair failed and we were unable to recover it. 00:26:32.692 [2024-11-20 09:17:09.323983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.692 [2024-11-20 09:17:09.324009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.692 qpair failed and we were unable to recover it. 
00:26:32.692 [2024-11-20 09:17:09.324096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.692 [2024-11-20 09:17:09.324124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.692 qpair failed and we were unable to recover it. 00:26:32.692 [2024-11-20 09:17:09.324213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.692 [2024-11-20 09:17:09.324239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.692 qpair failed and we were unable to recover it. 00:26:32.692 [2024-11-20 09:17:09.324330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.692 [2024-11-20 09:17:09.324359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.692 qpair failed and we were unable to recover it. 00:26:32.692 [2024-11-20 09:17:09.324449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.692 [2024-11-20 09:17:09.324475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.692 qpair failed and we were unable to recover it. 00:26:32.692 [2024-11-20 09:17:09.324571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.692 [2024-11-20 09:17:09.324597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.692 qpair failed and we were unable to recover it. 
00:26:32.692 [2024-11-20 09:17:09.324708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.692 [2024-11-20 09:17:09.324734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.692 qpair failed and we were unable to recover it. 00:26:32.692 [2024-11-20 09:17:09.324820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.692 [2024-11-20 09:17:09.324847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.692 qpair failed and we were unable to recover it. 00:26:32.692 [2024-11-20 09:17:09.324932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.692 [2024-11-20 09:17:09.324958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.692 qpair failed and we were unable to recover it. 00:26:32.692 [2024-11-20 09:17:09.325071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.692 [2024-11-20 09:17:09.325098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.692 qpair failed and we were unable to recover it. 00:26:32.692 [2024-11-20 09:17:09.325184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.692 [2024-11-20 09:17:09.325210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.692 qpair failed and we were unable to recover it. 
00:26:32.692 [2024-11-20 09:17:09.325291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.692 [2024-11-20 09:17:09.325323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.692 qpair failed and we were unable to recover it. 00:26:32.692 [2024-11-20 09:17:09.325409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.692 [2024-11-20 09:17:09.325435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.692 qpair failed and we were unable to recover it. 00:26:32.692 [2024-11-20 09:17:09.325524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.692 [2024-11-20 09:17:09.325550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.692 qpair failed and we were unable to recover it. 00:26:32.692 [2024-11-20 09:17:09.325630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.692 [2024-11-20 09:17:09.325656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.692 qpair failed and we were unable to recover it. 00:26:32.692 [2024-11-20 09:17:09.325749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.692 [2024-11-20 09:17:09.325774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.692 qpair failed and we were unable to recover it. 
00:26:32.692 [2024-11-20 09:17:09.325855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.692 [2024-11-20 09:17:09.325881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.692 qpair failed and we were unable to recover it. 00:26:32.692 [2024-11-20 09:17:09.325979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.692 [2024-11-20 09:17:09.326019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.692 qpair failed and we were unable to recover it. 00:26:32.692 [2024-11-20 09:17:09.326116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.692 [2024-11-20 09:17:09.326142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.692 qpair failed and we were unable to recover it. 00:26:32.692 [2024-11-20 09:17:09.326228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.692 [2024-11-20 09:17:09.326254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 00:26:32.693 [2024-11-20 09:17:09.326369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.326396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 
00:26:32.693 [2024-11-20 09:17:09.326513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.326539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 00:26:32.693 [2024-11-20 09:17:09.326627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.326653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 00:26:32.693 [2024-11-20 09:17:09.326727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.326753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 00:26:32.693 [2024-11-20 09:17:09.326840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.326870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 00:26:32.693 [2024-11-20 09:17:09.326989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.327016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 
00:26:32.693 [2024-11-20 09:17:09.327131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.327157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 00:26:32.693 [2024-11-20 09:17:09.327239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.327265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 00:26:32.693 [2024-11-20 09:17:09.327355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.327381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 00:26:32.693 [2024-11-20 09:17:09.327466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.327492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 00:26:32.693 [2024-11-20 09:17:09.327591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.327620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 
00:26:32.693 [2024-11-20 09:17:09.327723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.327750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 00:26:32.693 [2024-11-20 09:17:09.327834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.327860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 00:26:32.693 [2024-11-20 09:17:09.327936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.327964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 00:26:32.693 [2024-11-20 09:17:09.328051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.328078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 00:26:32.693 [2024-11-20 09:17:09.328190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.328217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 
00:26:32.693 [2024-11-20 09:17:09.328300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.328332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 00:26:32.693 [2024-11-20 09:17:09.328415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.328441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 00:26:32.693 [2024-11-20 09:17:09.328527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.328553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 00:26:32.693 [2024-11-20 09:17:09.328632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.328657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 00:26:32.693 [2024-11-20 09:17:09.328768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.328794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 
00:26:32.693 [2024-11-20 09:17:09.328882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.328909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 00:26:32.693 [2024-11-20 09:17:09.328991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.329017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 00:26:32.693 [2024-11-20 09:17:09.329111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.329137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 00:26:32.693 [2024-11-20 09:17:09.329233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.329263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 00:26:32.693 [2024-11-20 09:17:09.329383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.329411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 
00:26:32.693 [2024-11-20 09:17:09.329497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.329523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 00:26:32.693 [2024-11-20 09:17:09.329609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.329636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 00:26:32.693 [2024-11-20 09:17:09.329752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.329778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 00:26:32.693 [2024-11-20 09:17:09.329852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.329878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 00:26:32.693 [2024-11-20 09:17:09.329961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.329987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 
00:26:32.693 [2024-11-20 09:17:09.330082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.330107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 00:26:32.693 [2024-11-20 09:17:09.330194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.330219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 00:26:32.693 [2024-11-20 09:17:09.330307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.330333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 00:26:32.693 [2024-11-20 09:17:09.330464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.330489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 00:26:32.693 [2024-11-20 09:17:09.330575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.330602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 
00:26:32.693 [2024-11-20 09:17:09.330711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.693 [2024-11-20 09:17:09.330737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.693 qpair failed and we were unable to recover it. 00:26:32.693 [2024-11-20 09:17:09.330855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.694 [2024-11-20 09:17:09.330881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.694 qpair failed and we were unable to recover it. 00:26:32.694 [2024-11-20 09:17:09.330962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.694 [2024-11-20 09:17:09.330988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.694 qpair failed and we were unable to recover it. 00:26:32.694 [2024-11-20 09:17:09.331077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.694 [2024-11-20 09:17:09.331102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.694 qpair failed and we were unable to recover it. 00:26:32.694 [2024-11-20 09:17:09.331206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.694 [2024-11-20 09:17:09.331246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.694 qpair failed and we were unable to recover it. 
00:26:32.694 [2024-11-20 09:17:09.331335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.694 [2024-11-20 09:17:09.331363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.694 qpair failed and we were unable to recover it. 00:26:32.694 [2024-11-20 09:17:09.331448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.694 [2024-11-20 09:17:09.331475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.694 qpair failed and we were unable to recover it. 00:26:32.694 [2024-11-20 09:17:09.331591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.694 [2024-11-20 09:17:09.331618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.694 qpair failed and we were unable to recover it. 00:26:32.694 [2024-11-20 09:17:09.331736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.694 [2024-11-20 09:17:09.331770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.694 qpair failed and we were unable to recover it. 00:26:32.694 [2024-11-20 09:17:09.331887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.694 [2024-11-20 09:17:09.331914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.694 qpair failed and we were unable to recover it. 
00:26:32.694 [2024-11-20 09:17:09.332001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.694 [2024-11-20 09:17:09.332029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.694 qpair failed and we were unable to recover it. 00:26:32.694 [2024-11-20 09:17:09.332125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.694 [2024-11-20 09:17:09.332152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.694 qpair failed and we were unable to recover it. 00:26:32.694 [2024-11-20 09:17:09.332236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.694 [2024-11-20 09:17:09.332262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.694 qpair failed and we were unable to recover it. 00:26:32.694 [2024-11-20 09:17:09.332371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.694 [2024-11-20 09:17:09.332398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.694 qpair failed and we were unable to recover it. 00:26:32.694 [2024-11-20 09:17:09.332505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.694 [2024-11-20 09:17:09.332530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.694 qpair failed and we were unable to recover it. 
00:26:32.694 [2024-11-20 09:17:09.332636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.694 [2024-11-20 09:17:09.332661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.694 qpair failed and we were unable to recover it. 00:26:32.694 [2024-11-20 09:17:09.332746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.694 [2024-11-20 09:17:09.332771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.694 qpair failed and we were unable to recover it. 00:26:32.694 [2024-11-20 09:17:09.332887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.694 [2024-11-20 09:17:09.332913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.694 qpair failed and we were unable to recover it. 00:26:32.694 [2024-11-20 09:17:09.332994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.694 [2024-11-20 09:17:09.333020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.694 qpair failed and we were unable to recover it. 00:26:32.694 [2024-11-20 09:17:09.333107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.694 [2024-11-20 09:17:09.333133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.694 qpair failed and we were unable to recover it. 
00:26:32.694 [2024-11-20 09:17:09.333224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.694 [2024-11-20 09:17:09.333250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.694 qpair failed and we were unable to recover it. 00:26:32.694 [2024-11-20 09:17:09.333391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.694 [2024-11-20 09:17:09.333417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.694 qpair failed and we were unable to recover it. 00:26:32.694 [2024-11-20 09:17:09.333525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.694 [2024-11-20 09:17:09.333550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.694 qpair failed and we were unable to recover it. 00:26:32.694 [2024-11-20 09:17:09.333671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.694 [2024-11-20 09:17:09.333697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.694 qpair failed and we were unable to recover it. 00:26:32.694 [2024-11-20 09:17:09.333787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.694 [2024-11-20 09:17:09.333813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.694 qpair failed and we were unable to recover it. 
00:26:32.694 [2024-11-20 09:17:09.333904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.694 [2024-11-20 09:17:09.333944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.694 qpair failed and we were unable to recover it.
00:26:32.694 [2024-11-20 09:17:09.334071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.694 [2024-11-20 09:17:09.334098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.694 qpair failed and we were unable to recover it.
00:26:32.694 [2024-11-20 09:17:09.334202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.694 [2024-11-20 09:17:09.334229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.694 qpair failed and we were unable to recover it.
00:26:32.694 [2024-11-20 09:17:09.334327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.694 [2024-11-20 09:17:09.334354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.694 qpair failed and we were unable to recover it.
00:26:32.694 [2024-11-20 09:17:09.334442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.694 [2024-11-20 09:17:09.334468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.694 qpair failed and we were unable to recover it.
00:26:32.694 [2024-11-20 09:17:09.334584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.694 [2024-11-20 09:17:09.334610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.694 qpair failed and we were unable to recover it.
00:26:32.694 [2024-11-20 09:17:09.334710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.694 [2024-11-20 09:17:09.334736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.694 qpair failed and we were unable to recover it.
00:26:32.694 [2024-11-20 09:17:09.334822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.694 [2024-11-20 09:17:09.334848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.694 qpair failed and we were unable to recover it.
00:26:32.694 [2024-11-20 09:17:09.334982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.694 [2024-11-20 09:17:09.335021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.694 qpair failed and we were unable to recover it.
00:26:32.694 [2024-11-20 09:17:09.335117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.694 [2024-11-20 09:17:09.335144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.694 qpair failed and we were unable to recover it.
00:26:32.694 [2024-11-20 09:17:09.335263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.694 [2024-11-20 09:17:09.335290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.694 qpair failed and we were unable to recover it.
00:26:32.694 [2024-11-20 09:17:09.335379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.694 [2024-11-20 09:17:09.335404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.694 qpair failed and we were unable to recover it.
00:26:32.694 [2024-11-20 09:17:09.335516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.694 [2024-11-20 09:17:09.335543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.694 qpair failed and we were unable to recover it.
00:26:32.694 [2024-11-20 09:17:09.335654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.694 [2024-11-20 09:17:09.335680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.694 qpair failed and we were unable to recover it.
00:26:32.694 [2024-11-20 09:17:09.335776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.335804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.335922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.335948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.336034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.336061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.336171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.336198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.336279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.336320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.336413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.336439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.336531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.336557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.336640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.336667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.336779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.336805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.336913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.336948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.337055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.337082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.337170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.337196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.337287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.337324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.337442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.337469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.337552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.337578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.337682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.337724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.337839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.337867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.337983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.338011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.338145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.338173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.338282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.338315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.338455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.338482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.338618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.338645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.338777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.338804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.338892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.338919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.339059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.339100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.339260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.339288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.339395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.339422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.339505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.339531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.339617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.339642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.339734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.339759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.339897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.339925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.340015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.340042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.340124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.340151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.340256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.340283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.340405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.340432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.340521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.340548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.340687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.340729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.340835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.695 [2024-11-20 09:17:09.340860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.695 qpair failed and we were unable to recover it.
00:26:32.695 [2024-11-20 09:17:09.340969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.340996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.341087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.341115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.341264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.341311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.341421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.341460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.341557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.341597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.341737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.341782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.341915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.341943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.342046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.342072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.342182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.342207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.342335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.342374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.342465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.342493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.342588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.342624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.342717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.342743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.342830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.342855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.342970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.342996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.343082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.343107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.343194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.343219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.343339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.343368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.343462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.343490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.343577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.343603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.343717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.343743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.343829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.343855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.343986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.344011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.344107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.344134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.344242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.344282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.344426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.344455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.344548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.344595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.344694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.344723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.344867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.344899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.345015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.345043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.345127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.345153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.345279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.345339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.345454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.345480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.345599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.345625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.345739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.345764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.345842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.345868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.345956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.345983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.696 [2024-11-20 09:17:09.346083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.696 [2024-11-20 09:17:09.346123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.696 qpair failed and we were unable to recover it.
00:26:32.697 [2024-11-20 09:17:09.346217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.697 [2024-11-20 09:17:09.346250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.697 qpair failed and we were unable to recover it.
00:26:32.697 [2024-11-20 09:17:09.346395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.697 [2024-11-20 09:17:09.346434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.697 qpair failed and we were unable to recover it.
00:26:32.697 [2024-11-20 09:17:09.346524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.697 [2024-11-20 09:17:09.346551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.697 qpair failed and we were unable to recover it.
00:26:32.697 [2024-11-20 09:17:09.346644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.697 [2024-11-20 09:17:09.346671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.697 qpair failed and we were unable to recover it.
00:26:32.697 [2024-11-20 09:17:09.346759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.697 [2024-11-20 09:17:09.346785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.697 qpair failed and we were unable to recover it.
00:26:32.697 [2024-11-20 09:17:09.346891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.697 [2024-11-20 09:17:09.346919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.697 qpair failed and we were unable to recover it.
00:26:32.697 [2024-11-20 09:17:09.347048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.697 [2024-11-20 09:17:09.347074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.697 qpair failed and we were unable to recover it.
00:26:32.697 [2024-11-20 09:17:09.347196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.697 [2024-11-20 09:17:09.347222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.697 qpair failed and we were unable to recover it.
00:26:32.697 [2024-11-20 09:17:09.347311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.697 [2024-11-20 09:17:09.347337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.697 qpair failed and we were unable to recover it.
00:26:32.697 [2024-11-20 09:17:09.347426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.697 [2024-11-20 09:17:09.347451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.697 qpair failed and we were unable to recover it.
00:26:32.697 [2024-11-20 09:17:09.347590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.697 [2024-11-20 09:17:09.347617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.697 qpair failed and we were unable to recover it.
00:26:32.697 [2024-11-20 09:17:09.347743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.697 [2024-11-20 09:17:09.347769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.697 qpair failed and we were unable to recover it.
00:26:32.697 [2024-11-20 09:17:09.347858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.697 [2024-11-20 09:17:09.347884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.697 qpair failed and we were unable to recover it.
00:26:32.697 [2024-11-20 09:17:09.347974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.697 [2024-11-20 09:17:09.348001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.697 qpair failed and we were unable to recover it.
00:26:32.697 [2024-11-20 09:17:09.348111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.697 [2024-11-20 09:17:09.348150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.697 qpair failed and we were unable to recover it.
00:26:32.697 [2024-11-20 09:17:09.348253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.697 [2024-11-20 09:17:09.348280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.697 qpair failed and we were unable to recover it.
00:26:32.697 [2024-11-20 09:17:09.348413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.697 [2024-11-20 09:17:09.348454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.697 qpair failed and we were unable to recover it.
00:26:32.697 [2024-11-20 09:17:09.348550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.697 [2024-11-20 09:17:09.348580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.697 qpair failed and we were unable to recover it.
00:26:32.697 [2024-11-20 09:17:09.348657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.697 [2024-11-20 09:17:09.348684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.697 qpair failed and we were unable to recover it.
00:26:32.697 [2024-11-20 09:17:09.348788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.697 [2024-11-20 09:17:09.348821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.697 qpair failed and we were unable to recover it.
00:26:32.697 [2024-11-20 09:17:09.349000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.697 [2024-11-20 09:17:09.349046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.697 qpair failed and we were unable to recover it.
00:26:32.697 [2024-11-20 09:17:09.349142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.697 [2024-11-20 09:17:09.349168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.697 qpair failed and we were unable to recover it.
00:26:32.697 [2024-11-20 09:17:09.349285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.697 [2024-11-20 09:17:09.349316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.697 qpair failed and we were unable to recover it.
00:26:32.697 [2024-11-20 09:17:09.349433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.697 [2024-11-20 09:17:09.349459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.697 qpair failed and we were unable to recover it.
00:26:32.697 [2024-11-20 09:17:09.349544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.697 [2024-11-20 09:17:09.349570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.697 qpair failed and we were unable to recover it.
00:26:32.697 [2024-11-20 09:17:09.349701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.697 [2024-11-20 09:17:09.349743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.697 qpair failed and we were unable to recover it. 00:26:32.697 [2024-11-20 09:17:09.349880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.697 [2024-11-20 09:17:09.349909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.697 qpair failed and we were unable to recover it. 00:26:32.697 [2024-11-20 09:17:09.350007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.697 [2024-11-20 09:17:09.350036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.697 qpair failed and we were unable to recover it. 00:26:32.697 [2024-11-20 09:17:09.350129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.697 [2024-11-20 09:17:09.350156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.697 qpair failed and we were unable to recover it. 00:26:32.697 [2024-11-20 09:17:09.350249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.697 [2024-11-20 09:17:09.350275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.697 qpair failed and we were unable to recover it. 
00:26:32.697 [2024-11-20 09:17:09.350385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.697 [2024-11-20 09:17:09.350426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.697 qpair failed and we were unable to recover it. 00:26:32.697 [2024-11-20 09:17:09.350531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.697 [2024-11-20 09:17:09.350570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.697 qpair failed and we were unable to recover it. 00:26:32.697 [2024-11-20 09:17:09.350704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.697 [2024-11-20 09:17:09.350749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.697 qpair failed and we were unable to recover it. 00:26:32.698 [2024-11-20 09:17:09.350836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.698 [2024-11-20 09:17:09.350862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.698 qpair failed and we were unable to recover it. 00:26:32.698 [2024-11-20 09:17:09.350952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.698 [2024-11-20 09:17:09.350978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.698 qpair failed and we were unable to recover it. 
00:26:32.698 [2024-11-20 09:17:09.351107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.698 [2024-11-20 09:17:09.351153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.698 qpair failed and we were unable to recover it. 00:26:32.698 [2024-11-20 09:17:09.351241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.698 [2024-11-20 09:17:09.351267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.698 qpair failed and we were unable to recover it. 00:26:32.698 [2024-11-20 09:17:09.351364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.698 [2024-11-20 09:17:09.351391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.698 qpair failed and we were unable to recover it. 00:26:32.698 [2024-11-20 09:17:09.351493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.698 [2024-11-20 09:17:09.351519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.698 qpair failed and we were unable to recover it. 00:26:32.698 [2024-11-20 09:17:09.351666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.698 [2024-11-20 09:17:09.351693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.698 qpair failed and we were unable to recover it. 
00:26:32.698 [2024-11-20 09:17:09.351794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.698 [2024-11-20 09:17:09.351824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.698 qpair failed and we were unable to recover it. 00:26:32.698 [2024-11-20 09:17:09.351908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.698 [2024-11-20 09:17:09.351935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.698 qpair failed and we were unable to recover it. 00:26:32.698 [2024-11-20 09:17:09.352036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.698 [2024-11-20 09:17:09.352065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.698 qpair failed and we were unable to recover it. 00:26:32.698 [2024-11-20 09:17:09.352164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.698 [2024-11-20 09:17:09.352203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.698 qpair failed and we were unable to recover it. 00:26:32.698 [2024-11-20 09:17:09.352308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.698 [2024-11-20 09:17:09.352338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.698 qpair failed and we were unable to recover it. 
00:26:32.698 [2024-11-20 09:17:09.352454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.698 [2024-11-20 09:17:09.352481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.698 qpair failed and we were unable to recover it. 00:26:32.698 [2024-11-20 09:17:09.352566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.698 [2024-11-20 09:17:09.352593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.698 qpair failed and we were unable to recover it. 00:26:32.698 [2024-11-20 09:17:09.352685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.698 [2024-11-20 09:17:09.352713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.698 qpair failed and we were unable to recover it. 00:26:32.698 [2024-11-20 09:17:09.352804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.698 [2024-11-20 09:17:09.352832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.698 qpair failed and we were unable to recover it. 00:26:32.698 [2024-11-20 09:17:09.352939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.698 [2024-11-20 09:17:09.352965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.698 qpair failed and we were unable to recover it. 
00:26:32.698 [2024-11-20 09:17:09.353076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.698 [2024-11-20 09:17:09.353107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.698 qpair failed and we were unable to recover it. 00:26:32.698 [2024-11-20 09:17:09.353234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.698 [2024-11-20 09:17:09.353260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.698 qpair failed and we were unable to recover it. 00:26:32.698 [2024-11-20 09:17:09.353364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.698 [2024-11-20 09:17:09.353390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.698 qpair failed and we were unable to recover it. 00:26:32.698 [2024-11-20 09:17:09.353493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.698 [2024-11-20 09:17:09.353519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.698 qpair failed and we were unable to recover it. 00:26:32.698 [2024-11-20 09:17:09.353667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.698 [2024-11-20 09:17:09.353693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.698 qpair failed and we were unable to recover it. 
00:26:32.698 [2024-11-20 09:17:09.353816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.698 [2024-11-20 09:17:09.353842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.698 qpair failed and we were unable to recover it. 00:26:32.698 [2024-11-20 09:17:09.353954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.698 [2024-11-20 09:17:09.353980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.698 qpair failed and we were unable to recover it. 00:26:32.698 [2024-11-20 09:17:09.354117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.698 [2024-11-20 09:17:09.354142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.698 qpair failed and we were unable to recover it. 00:26:32.698 [2024-11-20 09:17:09.354226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.698 [2024-11-20 09:17:09.354251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.698 qpair failed and we were unable to recover it. 00:26:32.698 [2024-11-20 09:17:09.354337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.698 [2024-11-20 09:17:09.354364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.698 qpair failed and we were unable to recover it. 
00:26:32.698 [2024-11-20 09:17:09.354455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.698 [2024-11-20 09:17:09.354483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.698 qpair failed and we were unable to recover it. 00:26:32.698 [2024-11-20 09:17:09.354586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.698 [2024-11-20 09:17:09.354625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.698 qpair failed and we were unable to recover it. 00:26:32.698 [2024-11-20 09:17:09.354718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.698 [2024-11-20 09:17:09.354746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.698 qpair failed and we were unable to recover it. 00:26:32.698 [2024-11-20 09:17:09.354840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.698 [2024-11-20 09:17:09.354868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.698 qpair failed and we were unable to recover it. 00:26:32.698 [2024-11-20 09:17:09.354955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.698 [2024-11-20 09:17:09.354982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.698 qpair failed and we were unable to recover it. 
00:26:32.698 [2024-11-20 09:17:09.355109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.698 [2024-11-20 09:17:09.355149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.698 qpair failed and we were unable to recover it. 00:26:32.698 [2024-11-20 09:17:09.355246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.698 [2024-11-20 09:17:09.355274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.698 qpair failed and we were unable to recover it. 00:26:32.698 [2024-11-20 09:17:09.355383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.698 [2024-11-20 09:17:09.355411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.698 qpair failed and we were unable to recover it. 00:26:32.698 [2024-11-20 09:17:09.355507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.699 [2024-11-20 09:17:09.355534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.699 qpair failed and we were unable to recover it. 00:26:32.699 [2024-11-20 09:17:09.355629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.699 [2024-11-20 09:17:09.355657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.699 qpair failed and we were unable to recover it. 
00:26:32.699 [2024-11-20 09:17:09.355776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.699 [2024-11-20 09:17:09.355802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.699 qpair failed and we were unable to recover it. 00:26:32.699 [2024-11-20 09:17:09.355937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.699 [2024-11-20 09:17:09.355967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.699 qpair failed and we were unable to recover it. 00:26:32.699 [2024-11-20 09:17:09.356081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.699 [2024-11-20 09:17:09.356108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.699 qpair failed and we were unable to recover it. 00:26:32.699 [2024-11-20 09:17:09.356210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.699 [2024-11-20 09:17:09.356236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.699 qpair failed and we were unable to recover it. 00:26:32.699 [2024-11-20 09:17:09.356337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.699 [2024-11-20 09:17:09.356365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.699 qpair failed and we were unable to recover it. 
00:26:32.699 [2024-11-20 09:17:09.356451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.699 [2024-11-20 09:17:09.356478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.699 qpair failed and we were unable to recover it. 00:26:32.699 [2024-11-20 09:17:09.356574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.699 [2024-11-20 09:17:09.356602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.699 qpair failed and we were unable to recover it. 00:26:32.699 [2024-11-20 09:17:09.356712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.699 [2024-11-20 09:17:09.356755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.699 qpair failed and we were unable to recover it. 00:26:32.699 [2024-11-20 09:17:09.356887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.699 [2024-11-20 09:17:09.356931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.699 qpair failed and we were unable to recover it. 00:26:32.699 [2024-11-20 09:17:09.357091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.699 [2024-11-20 09:17:09.357137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.699 qpair failed and we were unable to recover it. 
00:26:32.699 [2024-11-20 09:17:09.357224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.699 [2024-11-20 09:17:09.357255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.699 qpair failed and we were unable to recover it. 00:26:32.699 [2024-11-20 09:17:09.357352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.699 [2024-11-20 09:17:09.357378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.699 qpair failed and we were unable to recover it. 00:26:32.699 [2024-11-20 09:17:09.357474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.699 [2024-11-20 09:17:09.357500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.699 qpair failed and we were unable to recover it. 00:26:32.699 [2024-11-20 09:17:09.357601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.699 [2024-11-20 09:17:09.357630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.699 qpair failed and we were unable to recover it. 00:26:32.699 [2024-11-20 09:17:09.357775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.699 [2024-11-20 09:17:09.357803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.699 qpair failed and we were unable to recover it. 
00:26:32.699 [2024-11-20 09:17:09.357908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.699 [2024-11-20 09:17:09.357936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.699 qpair failed and we were unable to recover it. 00:26:32.699 [2024-11-20 09:17:09.358035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.699 [2024-11-20 09:17:09.358075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.699 qpair failed and we were unable to recover it. 00:26:32.699 [2024-11-20 09:17:09.358186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.699 [2024-11-20 09:17:09.358226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.699 qpair failed and we were unable to recover it. 00:26:32.699 [2024-11-20 09:17:09.358375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.699 [2024-11-20 09:17:09.358402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.699 qpair failed and we were unable to recover it. 00:26:32.699 [2024-11-20 09:17:09.358512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.699 [2024-11-20 09:17:09.358538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.699 qpair failed and we were unable to recover it. 
00:26:32.699 [2024-11-20 09:17:09.358655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.699 [2024-11-20 09:17:09.358683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.699 qpair failed and we were unable to recover it. 00:26:32.699 [2024-11-20 09:17:09.358778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.699 [2024-11-20 09:17:09.358805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.699 qpair failed and we were unable to recover it. 00:26:32.699 [2024-11-20 09:17:09.358902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.699 [2024-11-20 09:17:09.358930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.699 qpair failed and we were unable to recover it. 00:26:32.699 [2024-11-20 09:17:09.359020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.699 [2024-11-20 09:17:09.359047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.699 qpair failed and we were unable to recover it. 00:26:32.699 [2024-11-20 09:17:09.359136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.699 [2024-11-20 09:17:09.359163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.699 qpair failed and we were unable to recover it. 
00:26:32.699 [2024-11-20 09:17:09.359244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.699 [2024-11-20 09:17:09.359271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.699 qpair failed and we were unable to recover it. 00:26:32.699 [2024-11-20 09:17:09.359403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.699 [2024-11-20 09:17:09.359430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.699 qpair failed and we were unable to recover it. 00:26:32.699 [2024-11-20 09:17:09.359510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.699 [2024-11-20 09:17:09.359538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.699 qpair failed and we were unable to recover it. 00:26:32.699 [2024-11-20 09:17:09.359620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.699 [2024-11-20 09:17:09.359646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.699 qpair failed and we were unable to recover it. 00:26:32.699 [2024-11-20 09:17:09.359754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.699 [2024-11-20 09:17:09.359781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.699 qpair failed and we were unable to recover it. 
00:26:32.700 [2024-11-20 09:17:09.359893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:26:32.700 [2024-11-20 09:17:09.359921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 
00:26:32.700 qpair failed and we were unable to recover it. 
[... the same posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock "sock connection error" / "qpair failed and we were unable to recover it." triplet repeats approximately 114 more times between 09:17:09.360039 and 09:17:09.375440 (replay timestamps 00:26:32.700 through 00:26:32.703), cycling through tqpair values 0x18fffa0, 0x7fe974000b90, 0x7fe96c000b90, and 0x7fe968000b90, all with addr=10.0.0.2, port=4420 ...]
00:26:32.703 [2024-11-20 09:17:09.375572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.703 [2024-11-20 09:17:09.375617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.703 qpair failed and we were unable to recover it. 00:26:32.703 [2024-11-20 09:17:09.375716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.703 [2024-11-20 09:17:09.375759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.703 qpair failed and we were unable to recover it. 00:26:32.703 [2024-11-20 09:17:09.375848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.703 [2024-11-20 09:17:09.375873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.703 qpair failed and we were unable to recover it. 00:26:32.703 [2024-11-20 09:17:09.375981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.703 [2024-11-20 09:17:09.376007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.703 qpair failed and we were unable to recover it. 00:26:32.703 [2024-11-20 09:17:09.376094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.703 [2024-11-20 09:17:09.376121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.703 qpair failed and we were unable to recover it. 
00:26:32.703 [2024-11-20 09:17:09.376204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.703 [2024-11-20 09:17:09.376230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.703 qpair failed and we were unable to recover it. 00:26:32.703 [2024-11-20 09:17:09.376345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.703 [2024-11-20 09:17:09.376372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.703 qpair failed and we were unable to recover it. 00:26:32.703 [2024-11-20 09:17:09.376480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.703 [2024-11-20 09:17:09.376510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.703 qpair failed and we were unable to recover it. 00:26:32.703 [2024-11-20 09:17:09.376616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.703 [2024-11-20 09:17:09.376648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.703 qpair failed and we were unable to recover it. 00:26:32.703 [2024-11-20 09:17:09.376806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.703 [2024-11-20 09:17:09.376838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.703 qpair failed and we were unable to recover it. 
00:26:32.703 [2024-11-20 09:17:09.376934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.703 [2024-11-20 09:17:09.376961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.703 qpair failed and we were unable to recover it. 00:26:32.703 [2024-11-20 09:17:09.377062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.704 [2024-11-20 09:17:09.377092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.704 qpair failed and we were unable to recover it. 00:26:32.704 [2024-11-20 09:17:09.377223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.704 [2024-11-20 09:17:09.377249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.704 qpair failed and we were unable to recover it. 00:26:32.704 [2024-11-20 09:17:09.377337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.704 [2024-11-20 09:17:09.377368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.704 qpair failed and we were unable to recover it. 00:26:32.704 [2024-11-20 09:17:09.377452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.704 [2024-11-20 09:17:09.377478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.704 qpair failed and we were unable to recover it. 
00:26:32.704 [2024-11-20 09:17:09.377603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.704 [2024-11-20 09:17:09.377647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.704 qpair failed and we were unable to recover it. 00:26:32.704 [2024-11-20 09:17:09.377754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.704 [2024-11-20 09:17:09.377782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.704 qpair failed and we were unable to recover it. 00:26:32.704 [2024-11-20 09:17:09.377895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.704 [2024-11-20 09:17:09.377920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.704 qpair failed and we were unable to recover it. 00:26:32.704 [2024-11-20 09:17:09.378022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.704 [2024-11-20 09:17:09.378050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.704 qpair failed and we were unable to recover it. 00:26:32.704 [2024-11-20 09:17:09.378127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.704 [2024-11-20 09:17:09.378154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.704 qpair failed and we were unable to recover it. 
00:26:32.704 [2024-11-20 09:17:09.378241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.704 [2024-11-20 09:17:09.378270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.704 qpair failed and we were unable to recover it. 00:26:32.704 [2024-11-20 09:17:09.378373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.704 [2024-11-20 09:17:09.378400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.704 qpair failed and we were unable to recover it. 00:26:32.704 [2024-11-20 09:17:09.378481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.704 [2024-11-20 09:17:09.378507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.704 qpair failed and we were unable to recover it. 00:26:32.704 [2024-11-20 09:17:09.378595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.704 [2024-11-20 09:17:09.378621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.704 qpair failed and we were unable to recover it. 00:26:32.704 [2024-11-20 09:17:09.378745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.704 [2024-11-20 09:17:09.378773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.704 qpair failed and we were unable to recover it. 
00:26:32.704 [2024-11-20 09:17:09.378873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.704 [2024-11-20 09:17:09.378902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.704 qpair failed and we were unable to recover it. 00:26:32.704 [2024-11-20 09:17:09.379027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.704 [2024-11-20 09:17:09.379052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.704 qpair failed and we were unable to recover it. 00:26:32.704 [2024-11-20 09:17:09.379139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.704 [2024-11-20 09:17:09.379165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.704 qpair failed and we were unable to recover it. 00:26:32.704 [2024-11-20 09:17:09.379251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.704 [2024-11-20 09:17:09.379278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.704 qpair failed and we were unable to recover it. 00:26:32.704 [2024-11-20 09:17:09.379380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.704 [2024-11-20 09:17:09.379409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.704 qpair failed and we were unable to recover it. 
00:26:32.704 [2024-11-20 09:17:09.379488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.704 [2024-11-20 09:17:09.379515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.704 qpair failed and we were unable to recover it. 00:26:32.704 [2024-11-20 09:17:09.379660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.704 [2024-11-20 09:17:09.379698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.704 qpair failed and we were unable to recover it. 00:26:32.704 [2024-11-20 09:17:09.379791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.704 [2024-11-20 09:17:09.379817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.704 qpair failed and we were unable to recover it. 00:26:32.704 [2024-11-20 09:17:09.379931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.704 [2024-11-20 09:17:09.379957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.704 qpair failed and we were unable to recover it. 00:26:32.704 [2024-11-20 09:17:09.380032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.704 [2024-11-20 09:17:09.380058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.704 qpair failed and we were unable to recover it. 
00:26:32.704 [2024-11-20 09:17:09.380148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.704 [2024-11-20 09:17:09.380173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.704 qpair failed and we were unable to recover it. 00:26:32.704 [2024-11-20 09:17:09.380251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.704 [2024-11-20 09:17:09.380276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.704 qpair failed and we were unable to recover it. 00:26:32.704 [2024-11-20 09:17:09.380374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.704 [2024-11-20 09:17:09.380401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.704 qpair failed and we were unable to recover it. 00:26:32.704 [2024-11-20 09:17:09.380484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.704 [2024-11-20 09:17:09.380510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.704 qpair failed and we were unable to recover it. 00:26:32.704 [2024-11-20 09:17:09.380597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.704 [2024-11-20 09:17:09.380622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.704 qpair failed and we were unable to recover it. 
00:26:32.704 [2024-11-20 09:17:09.380704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.704 [2024-11-20 09:17:09.380729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.704 qpair failed and we were unable to recover it. 00:26:32.704 [2024-11-20 09:17:09.380808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.704 [2024-11-20 09:17:09.380834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.704 qpair failed and we were unable to recover it. 00:26:32.704 [2024-11-20 09:17:09.380929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.704 [2024-11-20 09:17:09.380955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.704 qpair failed and we were unable to recover it. 00:26:32.704 [2024-11-20 09:17:09.381037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.704 [2024-11-20 09:17:09.381062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.704 qpair failed and we were unable to recover it. 00:26:32.704 [2024-11-20 09:17:09.381145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.704 [2024-11-20 09:17:09.381171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.704 qpair failed and we were unable to recover it. 
00:26:32.704 [2024-11-20 09:17:09.381315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.704 [2024-11-20 09:17:09.381341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 00:26:32.705 [2024-11-20 09:17:09.381454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.381481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 00:26:32.705 [2024-11-20 09:17:09.381603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.381628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 00:26:32.705 [2024-11-20 09:17:09.381704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.381729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 00:26:32.705 [2024-11-20 09:17:09.381815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.381841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 
00:26:32.705 [2024-11-20 09:17:09.381932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.381958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 00:26:32.705 [2024-11-20 09:17:09.382067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.382093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 00:26:32.705 [2024-11-20 09:17:09.382197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.382237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 00:26:32.705 [2024-11-20 09:17:09.382364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.382395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 00:26:32.705 [2024-11-20 09:17:09.382508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.382547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 
00:26:32.705 [2024-11-20 09:17:09.382638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.382666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 00:26:32.705 [2024-11-20 09:17:09.382782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.382809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 00:26:32.705 [2024-11-20 09:17:09.382933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.382959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 00:26:32.705 [2024-11-20 09:17:09.383069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.383096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 00:26:32.705 [2024-11-20 09:17:09.383212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.383238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 
00:26:32.705 [2024-11-20 09:17:09.383325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.383354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 00:26:32.705 [2024-11-20 09:17:09.383465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.383491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 00:26:32.705 [2024-11-20 09:17:09.383581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.383610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 00:26:32.705 [2024-11-20 09:17:09.383701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.383728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 00:26:32.705 [2024-11-20 09:17:09.383810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.383837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 
00:26:32.705 [2024-11-20 09:17:09.383941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.383973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 00:26:32.705 [2024-11-20 09:17:09.384111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.384138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 00:26:32.705 [2024-11-20 09:17:09.384228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.384256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 00:26:32.705 [2024-11-20 09:17:09.384357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.384384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 00:26:32.705 [2024-11-20 09:17:09.384463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.384490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 
00:26:32.705 [2024-11-20 09:17:09.384585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.384628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 00:26:32.705 [2024-11-20 09:17:09.384818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.384847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 00:26:32.705 [2024-11-20 09:17:09.384952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.384984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 00:26:32.705 [2024-11-20 09:17:09.385170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.385199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 00:26:32.705 [2024-11-20 09:17:09.385323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.385362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 
00:26:32.705 [2024-11-20 09:17:09.385481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.385509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 00:26:32.705 [2024-11-20 09:17:09.385641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.385685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 00:26:32.705 [2024-11-20 09:17:09.385783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.385814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 00:26:32.705 [2024-11-20 09:17:09.385920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.385947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 00:26:32.705 [2024-11-20 09:17:09.386090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.386118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 
00:26:32.705 [2024-11-20 09:17:09.386218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.386263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 00:26:32.705 [2024-11-20 09:17:09.386372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.386401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 00:26:32.705 [2024-11-20 09:17:09.386494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.705 [2024-11-20 09:17:09.386521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.705 qpair failed and we were unable to recover it. 00:26:32.705 [2024-11-20 09:17:09.386656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.706 [2024-11-20 09:17:09.386700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.706 qpair failed and we were unable to recover it. 00:26:32.706 [2024-11-20 09:17:09.386827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.706 [2024-11-20 09:17:09.386856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.706 qpair failed and we were unable to recover it. 
00:26:32.706 [2024-11-20 09:17:09.387010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.387036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.387119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.387144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.387271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.387318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.387414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.387443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.387530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.387556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.387688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.387718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.387819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.387849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.388003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.388035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.388166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.388198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.388332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.388371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.388481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.388520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.388663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.388694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.388798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.388824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.388904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.388929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.389040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.389069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.389187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.389213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.389311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.389337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.389424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.389450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.389547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.389574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.389686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.389711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.389830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.389856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.389975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.390004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.390095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.390128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.390268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.390316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.390441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.390469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.390562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.390589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.390677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.390704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.390786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.390813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.390933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.390960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.391045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.391072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.391158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.391183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.391286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.391333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.391440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.391480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.391598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.391626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.391736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.391768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.391880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.706 [2024-11-20 09:17:09.391907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.706 qpair failed and we were unable to recover it.
00:26:32.706 [2024-11-20 09:17:09.392065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.707 [2024-11-20 09:17:09.392097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.707 qpair failed and we were unable to recover it.
00:26:32.707 [2024-11-20 09:17:09.392226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.707 [2024-11-20 09:17:09.392253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.707 qpair failed and we were unable to recover it.
00:26:32.707 [2024-11-20 09:17:09.392330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.707 [2024-11-20 09:17:09.392359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.707 qpair failed and we were unable to recover it.
00:26:32.707 [2024-11-20 09:17:09.392473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.707 [2024-11-20 09:17:09.392500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.707 qpair failed and we were unable to recover it.
00:26:32.707 [2024-11-20 09:17:09.392633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.707 [2024-11-20 09:17:09.392663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.707 qpair failed and we were unable to recover it.
00:26:32.707 [2024-11-20 09:17:09.392791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.707 [2024-11-20 09:17:09.392822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.707 qpair failed and we were unable to recover it.
00:26:32.707 [2024-11-20 09:17:09.392944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.707 [2024-11-20 09:17:09.392977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.707 qpair failed and we were unable to recover it.
00:26:32.707 [2024-11-20 09:17:09.393137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.707 [2024-11-20 09:17:09.393169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.707 qpair failed and we were unable to recover it.
00:26:32.707 [2024-11-20 09:17:09.393308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.707 [2024-11-20 09:17:09.393335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.707 qpair failed and we were unable to recover it.
00:26:32.707 [2024-11-20 09:17:09.393423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.707 [2024-11-20 09:17:09.393449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.707 qpair failed and we were unable to recover it.
00:26:32.707 [2024-11-20 09:17:09.393538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.707 [2024-11-20 09:17:09.393564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.707 qpair failed and we were unable to recover it.
00:26:32.707 [2024-11-20 09:17:09.393691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.707 [2024-11-20 09:17:09.393720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.707 qpair failed and we were unable to recover it.
00:26:32.707 [2024-11-20 09:17:09.393818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.707 [2024-11-20 09:17:09.393843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.707 qpair failed and we were unable to recover it.
00:26:32.707 [2024-11-20 09:17:09.393955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.707 [2024-11-20 09:17:09.393986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.707 qpair failed and we were unable to recover it.
00:26:32.707 [2024-11-20 09:17:09.394115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.707 [2024-11-20 09:17:09.394154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.707 qpair failed and we were unable to recover it.
00:26:32.707 [2024-11-20 09:17:09.394285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.707 [2024-11-20 09:17:09.394332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.707 qpair failed and we were unable to recover it.
00:26:32.707 [2024-11-20 09:17:09.394426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.707 [2024-11-20 09:17:09.394455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.707 qpair failed and we were unable to recover it.
00:26:32.707 [2024-11-20 09:17:09.394548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.707 [2024-11-20 09:17:09.394575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.707 qpair failed and we were unable to recover it.
00:26:32.707 [2024-11-20 09:17:09.394731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.707 [2024-11-20 09:17:09.394760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.707 qpair failed and we were unable to recover it.
00:26:32.707 [2024-11-20 09:17:09.394875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.707 [2024-11-20 09:17:09.394904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.707 qpair failed and we were unable to recover it.
00:26:32.707 [2024-11-20 09:17:09.395006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.707 [2024-11-20 09:17:09.395052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.707 qpair failed and we were unable to recover it.
00:26:32.707 [2024-11-20 09:17:09.395197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.707 [2024-11-20 09:17:09.395236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.707 qpair failed and we were unable to recover it.
00:26:32.707 [2024-11-20 09:17:09.395335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.707 [2024-11-20 09:17:09.395363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.707 qpair failed and we were unable to recover it.
00:26:32.707 [2024-11-20 09:17:09.395458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.707 [2024-11-20 09:17:09.395486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.707 qpair failed and we were unable to recover it.
00:26:32.707 [2024-11-20 09:17:09.395568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.707 [2024-11-20 09:17:09.395595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.707 qpair failed and we were unable to recover it.
00:26:32.707 [2024-11-20 09:17:09.395672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.707 [2024-11-20 09:17:09.395697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.707 qpair failed and we were unable to recover it.
00:26:32.707 [2024-11-20 09:17:09.395787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.707 [2024-11-20 09:17:09.395813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.707 qpair failed and we were unable to recover it.
00:26:32.707 [2024-11-20 09:17:09.395913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.707 [2024-11-20 09:17:09.395939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.707 qpair failed and we were unable to recover it.
00:26:32.707 [2024-11-20 09:17:09.396025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.707 [2024-11-20 09:17:09.396050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.708 qpair failed and we were unable to recover it.
00:26:32.708 [2024-11-20 09:17:09.396147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.708 [2024-11-20 09:17:09.396176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.708 qpair failed and we were unable to recover it.
00:26:32.708 [2024-11-20 09:17:09.396264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.708 [2024-11-20 09:17:09.396291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.708 qpair failed and we were unable to recover it.
00:26:32.708 [2024-11-20 09:17:09.396447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.708 [2024-11-20 09:17:09.396486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.708 qpair failed and we were unable to recover it.
00:26:32.708 [2024-11-20 09:17:09.396580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.708 [2024-11-20 09:17:09.396608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.708 qpair failed and we were unable to recover it.
00:26:32.708 [2024-11-20 09:17:09.396689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.708 [2024-11-20 09:17:09.396716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.708 qpair failed and we were unable to recover it.
00:26:32.708 [2024-11-20 09:17:09.396827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.708 [2024-11-20 09:17:09.396853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.708 qpair failed and we were unable to recover it.
00:26:32.708 [2024-11-20 09:17:09.396987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.708 [2024-11-20 09:17:09.397032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.708 qpair failed and we were unable to recover it.
00:26:32.708 [2024-11-20 09:17:09.397154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.708 [2024-11-20 09:17:09.397182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.708 qpair failed and we were unable to recover it.
00:26:32.708 [2024-11-20 09:17:09.397291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.708 [2024-11-20 09:17:09.397326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.708 qpair failed and we were unable to recover it.
00:26:32.708 [2024-11-20 09:17:09.397407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.708 [2024-11-20 09:17:09.397434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.708 qpair failed and we were unable to recover it.
00:26:32.708 [2024-11-20 09:17:09.397525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.708 [2024-11-20 09:17:09.397551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.708 qpair failed and we were unable to recover it.
00:26:32.708 [2024-11-20 09:17:09.397706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.708 [2024-11-20 09:17:09.397752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.708 qpair failed and we were unable to recover it.
00:26:32.708 [2024-11-20 09:17:09.397878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.708 [2024-11-20 09:17:09.397909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.708 qpair failed and we were unable to recover it.
00:26:32.708 [2024-11-20 09:17:09.398032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.708 [2024-11-20 09:17:09.398064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.708 qpair failed and we were unable to recover it.
00:26:32.708 [2024-11-20 09:17:09.398184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.708 [2024-11-20 09:17:09.398225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.708 qpair failed and we were unable to recover it.
00:26:32.708 [2024-11-20 09:17:09.398351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.708 [2024-11-20 09:17:09.398380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.708 qpair failed and we were unable to recover it.
00:26:32.708 [2024-11-20 09:17:09.398474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.708 [2024-11-20 09:17:09.398501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.708 qpair failed and we were unable to recover it.
00:26:32.708 [2024-11-20 09:17:09.398616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.708 [2024-11-20 09:17:09.398642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.708 qpair failed and we were unable to recover it.
00:26:32.708 [2024-11-20 09:17:09.398748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.708 [2024-11-20 09:17:09.398779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.708 qpair failed and we were unable to recover it.
00:26:32.708 [2024-11-20 09:17:09.398904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.708 [2024-11-20 09:17:09.398950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.708 qpair failed and we were unable to recover it.
00:26:32.708 [2024-11-20 09:17:09.399108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.708 [2024-11-20 09:17:09.399155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.708 qpair failed and we were unable to recover it.
00:26:32.708 [2024-11-20 09:17:09.399269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.708 [2024-11-20 09:17:09.399297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.708 qpair failed and we were unable to recover it.
00:26:32.708 [2024-11-20 09:17:09.399406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.708 [2024-11-20 09:17:09.399434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.708 qpair failed and we were unable to recover it.
00:26:32.708 [2024-11-20 09:17:09.399524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.708 [2024-11-20 09:17:09.399550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.708 qpair failed and we were unable to recover it.
00:26:32.708 [2024-11-20 09:17:09.399684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.708 [2024-11-20 09:17:09.399735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.708 qpair failed and we were unable to recover it.
00:26:32.708 [2024-11-20 09:17:09.399849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.708 [2024-11-20 09:17:09.399875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.708 qpair failed and we were unable to recover it.
00:26:32.708 [2024-11-20 09:17:09.399967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.708 [2024-11-20 09:17:09.399995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.708 qpair failed and we were unable to recover it.
00:26:32.708 [2024-11-20 09:17:09.400119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.708 [2024-11-20 09:17:09.400145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.708 qpair failed and we were unable to recover it.
00:26:32.708 [2024-11-20 09:17:09.400259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.708 [2024-11-20 09:17:09.400299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.708 qpair failed and we were unable to recover it.
00:26:32.708 [2024-11-20 09:17:09.400426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.708 [2024-11-20 09:17:09.400454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.708 qpair failed and we were unable to recover it.
00:26:32.708 [2024-11-20 09:17:09.400544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.708 [2024-11-20 09:17:09.400570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.708 qpair failed and we were unable to recover it. 00:26:32.708 [2024-11-20 09:17:09.400664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.708 [2024-11-20 09:17:09.400690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.708 qpair failed and we were unable to recover it. 00:26:32.708 [2024-11-20 09:17:09.400805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.708 [2024-11-20 09:17:09.400831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.708 qpair failed and we were unable to recover it. 00:26:32.708 [2024-11-20 09:17:09.400931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.708 [2024-11-20 09:17:09.400971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.708 qpair failed and we were unable to recover it. 00:26:32.708 [2024-11-20 09:17:09.401091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.708 [2024-11-20 09:17:09.401119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.708 qpair failed and we were unable to recover it. 
00:26:32.708 [2024-11-20 09:17:09.401214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.708 [2024-11-20 09:17:09.401242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.708 qpair failed and we were unable to recover it. 00:26:32.708 [2024-11-20 09:17:09.401358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.708 [2024-11-20 09:17:09.401384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.708 qpair failed and we were unable to recover it. 00:26:32.708 [2024-11-20 09:17:09.401470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.708 [2024-11-20 09:17:09.401497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 00:26:32.709 [2024-11-20 09:17:09.401593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.401618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 00:26:32.709 [2024-11-20 09:17:09.401697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.401723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 
00:26:32.709 [2024-11-20 09:17:09.401831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.401857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 00:26:32.709 [2024-11-20 09:17:09.401969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.401995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 00:26:32.709 [2024-11-20 09:17:09.402107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.402135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 00:26:32.709 [2024-11-20 09:17:09.402214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.402244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 00:26:32.709 [2024-11-20 09:17:09.402343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.402372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 
00:26:32.709 [2024-11-20 09:17:09.402453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.402480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 00:26:32.709 [2024-11-20 09:17:09.402570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.402596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 00:26:32.709 [2024-11-20 09:17:09.402703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.402729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 00:26:32.709 [2024-11-20 09:17:09.402839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.402870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 00:26:32.709 [2024-11-20 09:17:09.402978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.403005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 
00:26:32.709 [2024-11-20 09:17:09.403083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.403109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 00:26:32.709 [2024-11-20 09:17:09.403185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.403216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 00:26:32.709 [2024-11-20 09:17:09.403336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.403363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 00:26:32.709 [2024-11-20 09:17:09.403443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.403470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 00:26:32.709 [2024-11-20 09:17:09.403563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.403590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 
00:26:32.709 [2024-11-20 09:17:09.403704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.403732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 00:26:32.709 [2024-11-20 09:17:09.403826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.403853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 00:26:32.709 [2024-11-20 09:17:09.403962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.403988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 00:26:32.709 [2024-11-20 09:17:09.404070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.404096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 00:26:32.709 [2024-11-20 09:17:09.404204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.404244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 
00:26:32.709 [2024-11-20 09:17:09.404371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.404399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 00:26:32.709 [2024-11-20 09:17:09.404486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.404513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 00:26:32.709 [2024-11-20 09:17:09.404627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.404674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 00:26:32.709 [2024-11-20 09:17:09.404777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.404820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 00:26:32.709 [2024-11-20 09:17:09.404963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.405009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 
00:26:32.709 [2024-11-20 09:17:09.405144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.405174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 00:26:32.709 [2024-11-20 09:17:09.405325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.405365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 00:26:32.709 [2024-11-20 09:17:09.405465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.405495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 00:26:32.709 [2024-11-20 09:17:09.405629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.405661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 00:26:32.709 [2024-11-20 09:17:09.405785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.405814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 
00:26:32.709 [2024-11-20 09:17:09.405924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.405972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 00:26:32.709 [2024-11-20 09:17:09.406113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.406139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 00:26:32.709 [2024-11-20 09:17:09.406250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.406277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 00:26:32.709 [2024-11-20 09:17:09.406394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.406421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 00:26:32.709 [2024-11-20 09:17:09.406504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.709 [2024-11-20 09:17:09.406531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.709 qpair failed and we were unable to recover it. 
00:26:32.709 [2024-11-20 09:17:09.406643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.406668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 00:26:32.710 [2024-11-20 09:17:09.406796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.406841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 00:26:32.710 [2024-11-20 09:17:09.406923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.406949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 00:26:32.710 [2024-11-20 09:17:09.407047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.407076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 00:26:32.710 [2024-11-20 09:17:09.407203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.407229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 
00:26:32.710 [2024-11-20 09:17:09.407324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.407352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 00:26:32.710 [2024-11-20 09:17:09.407437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.407463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 00:26:32.710 [2024-11-20 09:17:09.407552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.407593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 00:26:32.710 [2024-11-20 09:17:09.407708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.407754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 00:26:32.710 [2024-11-20 09:17:09.407869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.407915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 
00:26:32.710 [2024-11-20 09:17:09.408075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.408102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 00:26:32.710 [2024-11-20 09:17:09.408214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.408239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 00:26:32.710 [2024-11-20 09:17:09.408359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.408385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 00:26:32.710 [2024-11-20 09:17:09.408470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.408495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 00:26:32.710 [2024-11-20 09:17:09.408580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.408606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 
00:26:32.710 [2024-11-20 09:17:09.408717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.408745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 00:26:32.710 [2024-11-20 09:17:09.408864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.408899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 00:26:32.710 [2024-11-20 09:17:09.408991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.409017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 00:26:32.710 [2024-11-20 09:17:09.409132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.409158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 00:26:32.710 [2024-11-20 09:17:09.409252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.409279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 
00:26:32.710 [2024-11-20 09:17:09.409414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.409453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 00:26:32.710 [2024-11-20 09:17:09.409538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.409565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 00:26:32.710 [2024-11-20 09:17:09.409653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.409680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 00:26:32.710 [2024-11-20 09:17:09.409813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.409859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 00:26:32.710 [2024-11-20 09:17:09.409994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.410039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 
00:26:32.710 [2024-11-20 09:17:09.410155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.410184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 00:26:32.710 [2024-11-20 09:17:09.410312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.410341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 00:26:32.710 [2024-11-20 09:17:09.410439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.410469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 00:26:32.710 [2024-11-20 09:17:09.410556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.410582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 00:26:32.710 [2024-11-20 09:17:09.410708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.410739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 
00:26:32.710 [2024-11-20 09:17:09.410901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.410946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 00:26:32.710 [2024-11-20 09:17:09.411084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.411110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 00:26:32.710 [2024-11-20 09:17:09.411214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.411253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 00:26:32.710 [2024-11-20 09:17:09.411352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.411381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 00:26:32.710 [2024-11-20 09:17:09.411466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.411493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 
00:26:32.710 [2024-11-20 09:17:09.411592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.411624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 00:26:32.710 [2024-11-20 09:17:09.411753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.710 [2024-11-20 09:17:09.411785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.710 qpair failed and we were unable to recover it. 00:26:32.711 [2024-11-20 09:17:09.411921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.711 [2024-11-20 09:17:09.411955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.711 qpair failed and we were unable to recover it. 00:26:32.711 [2024-11-20 09:17:09.412073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.711 [2024-11-20 09:17:09.412101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.711 qpair failed and we were unable to recover it. 00:26:32.711 [2024-11-20 09:17:09.412222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.711 [2024-11-20 09:17:09.412251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.711 qpair failed and we were unable to recover it. 
00:26:32.711 [2024-11-20 09:17:09.412350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.711 [2024-11-20 09:17:09.412377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.711 qpair failed and we were unable to recover it. 00:26:32.711 [2024-11-20 09:17:09.412490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.711 [2024-11-20 09:17:09.412517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.711 qpair failed and we were unable to recover it. 00:26:32.711 [2024-11-20 09:17:09.412667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.711 [2024-11-20 09:17:09.412699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.711 qpair failed and we were unable to recover it. 00:26:32.711 [2024-11-20 09:17:09.412855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.711 [2024-11-20 09:17:09.412888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.711 qpair failed and we were unable to recover it. 00:26:32.711 [2024-11-20 09:17:09.412995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.711 [2024-11-20 09:17:09.413022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.711 qpair failed and we were unable to recover it. 
00:26:32.714 [2024-11-20 09:17:09.427840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.714 [2024-11-20 09:17:09.427868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.714 qpair failed and we were unable to recover it. 00:26:32.714 [2024-11-20 09:17:09.427949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.714 [2024-11-20 09:17:09.427976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.714 qpair failed and we were unable to recover it. 00:26:32.714 [2024-11-20 09:17:09.428056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.714 [2024-11-20 09:17:09.428082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.714 qpair failed and we were unable to recover it. 00:26:32.714 [2024-11-20 09:17:09.428171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.714 [2024-11-20 09:17:09.428197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.714 qpair failed and we were unable to recover it. 00:26:32.714 [2024-11-20 09:17:09.428278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.714 [2024-11-20 09:17:09.428312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.714 qpair failed and we were unable to recover it. 
00:26:32.714 [2024-11-20 09:17:09.428429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.714 [2024-11-20 09:17:09.428456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.714 qpair failed and we were unable to recover it. 00:26:32.714 [2024-11-20 09:17:09.428538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.714 [2024-11-20 09:17:09.428565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.714 qpair failed and we were unable to recover it. 00:26:32.714 [2024-11-20 09:17:09.428654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.714 [2024-11-20 09:17:09.428681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.714 qpair failed and we were unable to recover it. 00:26:32.714 [2024-11-20 09:17:09.428762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.714 [2024-11-20 09:17:09.428790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.714 qpair failed and we were unable to recover it. 00:26:32.714 [2024-11-20 09:17:09.428949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.714 [2024-11-20 09:17:09.428995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.714 qpair failed and we were unable to recover it. 
00:26:32.714 [2024-11-20 09:17:09.429143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.714 [2024-11-20 09:17:09.429187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.714 qpair failed and we were unable to recover it. 00:26:32.714 [2024-11-20 09:17:09.429326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.714 [2024-11-20 09:17:09.429372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.714 qpair failed and we were unable to recover it. 00:26:32.714 [2024-11-20 09:17:09.429463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.714 [2024-11-20 09:17:09.429491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.714 qpair failed and we were unable to recover it. 00:26:32.714 [2024-11-20 09:17:09.429600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.714 [2024-11-20 09:17:09.429629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.714 qpair failed and we were unable to recover it. 00:26:32.714 [2024-11-20 09:17:09.429775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.714 [2024-11-20 09:17:09.429808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.714 qpair failed and we were unable to recover it. 
00:26:32.714 [2024-11-20 09:17:09.429930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.714 [2024-11-20 09:17:09.429984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.714 qpair failed and we were unable to recover it. 00:26:32.714 [2024-11-20 09:17:09.430094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.714 [2024-11-20 09:17:09.430120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.714 qpair failed and we were unable to recover it. 00:26:32.714 [2024-11-20 09:17:09.430238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.714 [2024-11-20 09:17:09.430263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.714 qpair failed and we were unable to recover it. 00:26:32.714 [2024-11-20 09:17:09.430369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.714 [2024-11-20 09:17:09.430395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.714 qpair failed and we were unable to recover it. 00:26:32.714 [2024-11-20 09:17:09.430487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.714 [2024-11-20 09:17:09.430519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.714 qpair failed and we were unable to recover it. 
00:26:32.714 [2024-11-20 09:17:09.430625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.714 [2024-11-20 09:17:09.430650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.714 qpair failed and we were unable to recover it. 00:26:32.714 [2024-11-20 09:17:09.430758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.714 [2024-11-20 09:17:09.430804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.714 qpair failed and we were unable to recover it. 00:26:32.714 [2024-11-20 09:17:09.430891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.714 [2024-11-20 09:17:09.430919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.714 qpair failed and we were unable to recover it. 00:26:32.714 [2024-11-20 09:17:09.431040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.714 [2024-11-20 09:17:09.431066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.714 qpair failed and we were unable to recover it. 00:26:32.714 [2024-11-20 09:17:09.431156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.714 [2024-11-20 09:17:09.431182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.714 qpair failed and we were unable to recover it. 
00:26:32.714 [2024-11-20 09:17:09.431269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.431296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 00:26:32.715 [2024-11-20 09:17:09.431389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.431416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 00:26:32.715 [2024-11-20 09:17:09.431508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.431534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 00:26:32.715 [2024-11-20 09:17:09.431651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.431676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 00:26:32.715 [2024-11-20 09:17:09.431790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.431816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 
00:26:32.715 [2024-11-20 09:17:09.431909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.431938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 00:26:32.715 [2024-11-20 09:17:09.432031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.432059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 00:26:32.715 [2024-11-20 09:17:09.432173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.432198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 00:26:32.715 [2024-11-20 09:17:09.432280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.432313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 00:26:32.715 [2024-11-20 09:17:09.432405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.432431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 
00:26:32.715 [2024-11-20 09:17:09.432528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.432557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 00:26:32.715 [2024-11-20 09:17:09.432681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.432708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 00:26:32.715 [2024-11-20 09:17:09.432795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.432821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 00:26:32.715 [2024-11-20 09:17:09.432896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.432922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 00:26:32.715 [2024-11-20 09:17:09.433007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.433033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 
00:26:32.715 [2024-11-20 09:17:09.433140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.433167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 00:26:32.715 [2024-11-20 09:17:09.433275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.433323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 00:26:32.715 [2024-11-20 09:17:09.433422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.433449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 00:26:32.715 [2024-11-20 09:17:09.433567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.433596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 00:26:32.715 [2024-11-20 09:17:09.433680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.433706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 
00:26:32.715 [2024-11-20 09:17:09.433819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.433846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 00:26:32.715 [2024-11-20 09:17:09.433930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.433958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 00:26:32.715 [2024-11-20 09:17:09.434096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.434142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 00:26:32.715 [2024-11-20 09:17:09.434257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.434283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 00:26:32.715 [2024-11-20 09:17:09.434377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.434408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 
00:26:32.715 [2024-11-20 09:17:09.434490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.434517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 00:26:32.715 [2024-11-20 09:17:09.434624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.434670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 00:26:32.715 [2024-11-20 09:17:09.434756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.434781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 00:26:32.715 [2024-11-20 09:17:09.434977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.435003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 00:26:32.715 [2024-11-20 09:17:09.435094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.435120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 
00:26:32.715 [2024-11-20 09:17:09.435206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.435232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 00:26:32.715 [2024-11-20 09:17:09.435354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.435382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 00:26:32.715 [2024-11-20 09:17:09.435476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.435515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 00:26:32.715 [2024-11-20 09:17:09.435620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.435659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 00:26:32.715 [2024-11-20 09:17:09.435748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.435775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 
00:26:32.715 [2024-11-20 09:17:09.435868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.435894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 00:26:32.715 [2024-11-20 09:17:09.435999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.715 [2024-11-20 09:17:09.436029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.715 qpair failed and we were unable to recover it. 00:26:32.716 [2024-11-20 09:17:09.436127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.716 [2024-11-20 09:17:09.436155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.716 qpair failed and we were unable to recover it. 00:26:32.716 [2024-11-20 09:17:09.436313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.716 [2024-11-20 09:17:09.436340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.716 qpair failed and we were unable to recover it. 00:26:32.716 [2024-11-20 09:17:09.436426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.716 [2024-11-20 09:17:09.436453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.716 qpair failed and we were unable to recover it. 
00:26:32.716 [2024-11-20 09:17:09.436537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.716 [2024-11-20 09:17:09.436564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.716 qpair failed and we were unable to recover it. 00:26:32.716 [2024-11-20 09:17:09.436688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.716 [2024-11-20 09:17:09.436714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.716 qpair failed and we were unable to recover it. 00:26:32.716 [2024-11-20 09:17:09.436821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.716 [2024-11-20 09:17:09.436852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.716 qpair failed and we were unable to recover it. 00:26:32.716 [2024-11-20 09:17:09.436955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.716 [2024-11-20 09:17:09.436999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.716 qpair failed and we were unable to recover it. 00:26:32.716 [2024-11-20 09:17:09.437110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.716 [2024-11-20 09:17:09.437136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.716 qpair failed and we were unable to recover it. 
00:26:32.716 [2024-11-20 09:17:09.437209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.716 [2024-11-20 09:17:09.437235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.716 qpair failed and we were unable to recover it. 00:26:32.716 [2024-11-20 09:17:09.437318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.716 [2024-11-20 09:17:09.437345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.716 qpair failed and we were unable to recover it. 00:26:32.716 [2024-11-20 09:17:09.437453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.716 [2024-11-20 09:17:09.437479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.716 qpair failed and we were unable to recover it. 00:26:32.716 [2024-11-20 09:17:09.437588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.716 [2024-11-20 09:17:09.437627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.716 qpair failed and we were unable to recover it. 00:26:32.716 [2024-11-20 09:17:09.437729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.716 [2024-11-20 09:17:09.437758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.716 qpair failed and we were unable to recover it. 
00:26:32.716 [2024-11-20 09:17:09.437852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.716 [2024-11-20 09:17:09.437879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.716 qpair failed and we were unable to recover it.
00:26:32.716 [2024-11-20 09:17:09.439166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.716 [2024-11-20 09:17:09.439193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.716 qpair failed and we were unable to recover it.
00:26:32.716 [2024-11-20 09:17:09.440073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.716 [2024-11-20 09:17:09.440113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.716 qpair failed and we were unable to recover it.
00:26:32.716 [2024-11-20 09:17:09.440215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.716 [2024-11-20 09:17:09.440255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.717 qpair failed and we were unable to recover it.
00:26:32.719 [2024-11-20 09:17:09.453711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.719 [2024-11-20 09:17:09.453737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.719 qpair failed and we were unable to recover it. 00:26:32.719 [2024-11-20 09:17:09.453821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.719 [2024-11-20 09:17:09.453849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.719 qpair failed and we were unable to recover it. 00:26:32.719 [2024-11-20 09:17:09.453943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.719 [2024-11-20 09:17:09.453987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.719 qpair failed and we were unable to recover it. 00:26:32.719 [2024-11-20 09:17:09.454116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.719 [2024-11-20 09:17:09.454155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.719 qpair failed and we were unable to recover it. 00:26:32.719 [2024-11-20 09:17:09.454250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.719 [2024-11-20 09:17:09.454279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.719 qpair failed and we were unable to recover it. 
00:26:32.719 [2024-11-20 09:17:09.454392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.719 [2024-11-20 09:17:09.454421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.719 qpair failed and we were unable to recover it. 00:26:32.719 [2024-11-20 09:17:09.454508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.719 [2024-11-20 09:17:09.454535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.719 qpair failed and we were unable to recover it. 00:26:32.719 [2024-11-20 09:17:09.454625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.719 [2024-11-20 09:17:09.454669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.719 qpair failed and we were unable to recover it. 00:26:32.719 [2024-11-20 09:17:09.454782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.719 [2024-11-20 09:17:09.454824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.719 qpair failed and we were unable to recover it. 00:26:32.719 [2024-11-20 09:17:09.454982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.719 [2024-11-20 09:17:09.455014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.719 qpair failed and we were unable to recover it. 
00:26:32.719 [2024-11-20 09:17:09.455150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.719 [2024-11-20 09:17:09.455193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.719 qpair failed and we were unable to recover it. 00:26:32.719 [2024-11-20 09:17:09.455315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.719 [2024-11-20 09:17:09.455348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.719 qpair failed and we were unable to recover it. 00:26:32.719 [2024-11-20 09:17:09.455431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.455458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.720 qpair failed and we were unable to recover it. 00:26:32.720 [2024-11-20 09:17:09.455541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.455568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.720 qpair failed and we were unable to recover it. 00:26:32.720 [2024-11-20 09:17:09.455692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.455734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.720 qpair failed and we were unable to recover it. 
00:26:32.720 [2024-11-20 09:17:09.455836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.455868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.720 qpair failed and we were unable to recover it. 00:26:32.720 [2024-11-20 09:17:09.456002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.456033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.720 qpair failed and we were unable to recover it. 00:26:32.720 [2024-11-20 09:17:09.456157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.456197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.720 qpair failed and we were unable to recover it. 00:26:32.720 [2024-11-20 09:17:09.456292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.456331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.720 qpair failed and we were unable to recover it. 00:26:32.720 [2024-11-20 09:17:09.456442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.456481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.720 qpair failed and we were unable to recover it. 
00:26:32.720 [2024-11-20 09:17:09.456575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.456620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.720 qpair failed and we were unable to recover it. 00:26:32.720 [2024-11-20 09:17:09.456716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.456743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.720 qpair failed and we were unable to recover it. 00:26:32.720 [2024-11-20 09:17:09.456848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.456894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.720 qpair failed and we were unable to recover it. 00:26:32.720 [2024-11-20 09:17:09.457033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.457059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.720 qpair failed and we were unable to recover it. 00:26:32.720 [2024-11-20 09:17:09.457136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.457162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.720 qpair failed and we were unable to recover it. 
00:26:32.720 [2024-11-20 09:17:09.457289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.457345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.720 qpair failed and we were unable to recover it. 00:26:32.720 [2024-11-20 09:17:09.457474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.457504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.720 qpair failed and we were unable to recover it. 00:26:32.720 [2024-11-20 09:17:09.457601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.457628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.720 qpair failed and we were unable to recover it. 00:26:32.720 [2024-11-20 09:17:09.457739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.457765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.720 qpair failed and we were unable to recover it. 00:26:32.720 [2024-11-20 09:17:09.457866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.457899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.720 qpair failed and we were unable to recover it. 
00:26:32.720 [2024-11-20 09:17:09.458034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.458081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.720 qpair failed and we were unable to recover it. 00:26:32.720 [2024-11-20 09:17:09.458186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.458212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.720 qpair failed and we were unable to recover it. 00:26:32.720 [2024-11-20 09:17:09.458287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.458324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.720 qpair failed and we were unable to recover it. 00:26:32.720 [2024-11-20 09:17:09.458411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.458437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.720 qpair failed and we were unable to recover it. 00:26:32.720 [2024-11-20 09:17:09.458528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.458555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.720 qpair failed and we were unable to recover it. 
00:26:32.720 [2024-11-20 09:17:09.458700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.458732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.720 qpair failed and we were unable to recover it. 00:26:32.720 [2024-11-20 09:17:09.458846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.458893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.720 qpair failed and we were unable to recover it. 00:26:32.720 [2024-11-20 09:17:09.458996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.459029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.720 qpair failed and we were unable to recover it. 00:26:32.720 [2024-11-20 09:17:09.459121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.459154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.720 qpair failed and we were unable to recover it. 00:26:32.720 [2024-11-20 09:17:09.459285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.459320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.720 qpair failed and we were unable to recover it. 
00:26:32.720 [2024-11-20 09:17:09.459403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.459429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.720 qpair failed and we were unable to recover it. 00:26:32.720 [2024-11-20 09:17:09.459511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.459537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.720 qpair failed and we were unable to recover it. 00:26:32.720 [2024-11-20 09:17:09.459623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.459649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.720 qpair failed and we were unable to recover it. 00:26:32.720 [2024-11-20 09:17:09.459734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.459762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.720 qpair failed and we were unable to recover it. 00:26:32.720 [2024-11-20 09:17:09.459862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.459894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.720 qpair failed and we were unable to recover it. 
00:26:32.720 [2024-11-20 09:17:09.460005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.460048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.720 qpair failed and we were unable to recover it. 00:26:32.720 [2024-11-20 09:17:09.460160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.460193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.720 qpair failed and we were unable to recover it. 00:26:32.720 [2024-11-20 09:17:09.460325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.720 [2024-11-20 09:17:09.460375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.721 qpair failed and we were unable to recover it. 00:26:32.721 [2024-11-20 09:17:09.460457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.721 [2024-11-20 09:17:09.460483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.721 qpair failed and we were unable to recover it. 00:26:32.721 [2024-11-20 09:17:09.460568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.721 [2024-11-20 09:17:09.460595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.721 qpair failed and we were unable to recover it. 
00:26:32.721 [2024-11-20 09:17:09.460678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.721 [2024-11-20 09:17:09.460704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.721 qpair failed and we were unable to recover it. 00:26:32.721 [2024-11-20 09:17:09.460797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.721 [2024-11-20 09:17:09.460824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.721 qpair failed and we were unable to recover it. 00:26:32.721 [2024-11-20 09:17:09.460939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.721 [2024-11-20 09:17:09.460973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.721 qpair failed and we were unable to recover it. 00:26:32.721 [2024-11-20 09:17:09.461074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.721 [2024-11-20 09:17:09.461107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.721 qpair failed and we were unable to recover it. 00:26:32.721 [2024-11-20 09:17:09.461250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.721 [2024-11-20 09:17:09.461276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.721 qpair failed and we were unable to recover it. 
00:26:32.721 [2024-11-20 09:17:09.461385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.721 [2024-11-20 09:17:09.461414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.721 qpair failed and we were unable to recover it. 00:26:32.721 [2024-11-20 09:17:09.461489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.721 [2024-11-20 09:17:09.461516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.721 qpair failed and we were unable to recover it. 00:26:32.721 [2024-11-20 09:17:09.461629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.721 [2024-11-20 09:17:09.461656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.721 qpair failed and we were unable to recover it. 00:26:32.721 [2024-11-20 09:17:09.461741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.721 [2024-11-20 09:17:09.461768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.721 qpair failed and we were unable to recover it. 00:26:32.721 [2024-11-20 09:17:09.461860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.721 [2024-11-20 09:17:09.461892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.721 qpair failed and we were unable to recover it. 
00:26:32.721 [2024-11-20 09:17:09.462048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.721 [2024-11-20 09:17:09.462101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.721 qpair failed and we were unable to recover it. 00:26:32.721 [2024-11-20 09:17:09.462215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.721 [2024-11-20 09:17:09.462242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.721 qpair failed and we were unable to recover it. 00:26:32.721 [2024-11-20 09:17:09.462328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.721 [2024-11-20 09:17:09.462353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.721 qpair failed and we were unable to recover it. 00:26:32.721 [2024-11-20 09:17:09.462443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.721 [2024-11-20 09:17:09.462469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.721 qpair failed and we were unable to recover it. 00:26:32.721 [2024-11-20 09:17:09.462608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.721 [2024-11-20 09:17:09.462636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.721 qpair failed and we were unable to recover it. 
00:26:32.721 [2024-11-20 09:17:09.462726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.721 [2024-11-20 09:17:09.462753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.721 qpair failed and we were unable to recover it. 00:26:32.721 [2024-11-20 09:17:09.462865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.721 [2024-11-20 09:17:09.462892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.721 qpair failed and we were unable to recover it. 00:26:32.721 [2024-11-20 09:17:09.462982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.721 [2024-11-20 09:17:09.463009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.721 qpair failed and we were unable to recover it. 00:26:32.721 [2024-11-20 09:17:09.463110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.721 [2024-11-20 09:17:09.463149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.721 qpair failed and we were unable to recover it. 00:26:32.721 [2024-11-20 09:17:09.463243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.721 [2024-11-20 09:17:09.463270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.721 qpair failed and we were unable to recover it. 
00:26:32.721 [2024-11-20 09:17:09.463380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.721 [2024-11-20 09:17:09.463409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.721 qpair failed and we were unable to recover it. 00:26:32.721 [2024-11-20 09:17:09.463498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.721 [2024-11-20 09:17:09.463524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.721 qpair failed and we were unable to recover it. 00:26:32.721 [2024-11-20 09:17:09.463616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.721 [2024-11-20 09:17:09.463643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.721 qpair failed and we were unable to recover it. 00:26:32.721 [2024-11-20 09:17:09.463754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.721 [2024-11-20 09:17:09.463781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.721 qpair failed and we were unable to recover it. 00:26:32.721 [2024-11-20 09:17:09.463933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.721 [2024-11-20 09:17:09.463966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.721 qpair failed and we were unable to recover it. 
00:26:32.721 [2024-11-20 09:17:09.464075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.721 [2024-11-20 09:17:09.464122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.721 qpair failed and we were unable to recover it. 00:26:32.721 [2024-11-20 09:17:09.464268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.721 [2024-11-20 09:17:09.464295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.721 qpair failed and we were unable to recover it. 00:26:32.721 [2024-11-20 09:17:09.464399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.721 [2024-11-20 09:17:09.464425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.721 qpair failed and we were unable to recover it. 00:26:32.721 [2024-11-20 09:17:09.464512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.721 [2024-11-20 09:17:09.464539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.721 qpair failed and we were unable to recover it. 00:26:32.722 [2024-11-20 09:17:09.464652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.722 [2024-11-20 09:17:09.464677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.722 qpair failed and we were unable to recover it. 
00:26:32.725 [2024-11-20 09:17:09.480937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.725 [2024-11-20 09:17:09.480971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.725 qpair failed and we were unable to recover it. 00:26:32.725 [2024-11-20 09:17:09.481092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.725 [2024-11-20 09:17:09.481127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.725 qpair failed and we were unable to recover it. 00:26:32.725 [2024-11-20 09:17:09.481237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.725 [2024-11-20 09:17:09.481271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.725 qpair failed and we were unable to recover it. 00:26:32.725 [2024-11-20 09:17:09.481420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.725 [2024-11-20 09:17:09.481447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.725 qpair failed and we were unable to recover it. 00:26:32.725 [2024-11-20 09:17:09.481572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.725 [2024-11-20 09:17:09.481605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.725 qpair failed and we were unable to recover it. 
00:26:32.726 [2024-11-20 09:17:09.481731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.726 [2024-11-20 09:17:09.481765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.726 qpair failed and we were unable to recover it. 00:26:32.726 [2024-11-20 09:17:09.481861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.726 [2024-11-20 09:17:09.481894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.726 qpair failed and we were unable to recover it. 00:26:32.726 [2024-11-20 09:17:09.482000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.726 [2024-11-20 09:17:09.482033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.726 qpair failed and we were unable to recover it. 00:26:32.726 [2024-11-20 09:17:09.482142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.726 [2024-11-20 09:17:09.482177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.726 qpair failed and we were unable to recover it. 00:26:32.726 [2024-11-20 09:17:09.482322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.726 [2024-11-20 09:17:09.482349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.726 qpair failed and we were unable to recover it. 
00:26:32.726 [2024-11-20 09:17:09.482466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.726 [2024-11-20 09:17:09.482492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.726 qpair failed and we were unable to recover it. 00:26:32.726 [2024-11-20 09:17:09.482602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.726 [2024-11-20 09:17:09.482633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.726 qpair failed and we were unable to recover it. 00:26:32.726 [2024-11-20 09:17:09.482819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.726 [2024-11-20 09:17:09.482852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.726 qpair failed and we were unable to recover it. 00:26:32.726 [2024-11-20 09:17:09.482965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.726 [2024-11-20 09:17:09.482996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.726 qpair failed and we were unable to recover it. 00:26:32.726 [2024-11-20 09:17:09.483124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.726 [2024-11-20 09:17:09.483155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.726 qpair failed and we were unable to recover it. 
00:26:32.726 [2024-11-20 09:17:09.483294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.726 [2024-11-20 09:17:09.483330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.726 qpair failed and we were unable to recover it. 00:26:32.726 [2024-11-20 09:17:09.483412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.726 [2024-11-20 09:17:09.483438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.726 qpair failed and we were unable to recover it. 00:26:32.726 [2024-11-20 09:17:09.483612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.726 [2024-11-20 09:17:09.483658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.726 qpair failed and we were unable to recover it. 00:26:32.726 [2024-11-20 09:17:09.483779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.726 [2024-11-20 09:17:09.483807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.726 qpair failed and we were unable to recover it. 00:26:32.726 [2024-11-20 09:17:09.483926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.726 [2024-11-20 09:17:09.483958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.726 qpair failed and we were unable to recover it. 
00:26:32.726 [2024-11-20 09:17:09.484054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.726 [2024-11-20 09:17:09.484085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.726 qpair failed and we were unable to recover it. 00:26:32.726 [2024-11-20 09:17:09.484210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.726 [2024-11-20 09:17:09.484250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.726 qpair failed and we were unable to recover it. 00:26:32.726 [2024-11-20 09:17:09.484383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.726 [2024-11-20 09:17:09.484411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.726 qpair failed and we were unable to recover it. 00:26:32.726 [2024-11-20 09:17:09.484502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.726 [2024-11-20 09:17:09.484529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.726 qpair failed and we were unable to recover it. 00:26:32.726 [2024-11-20 09:17:09.484667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.726 [2024-11-20 09:17:09.484714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.726 qpair failed and we were unable to recover it. 
00:26:32.726 [2024-11-20 09:17:09.484827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.726 [2024-11-20 09:17:09.484853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.726 qpair failed and we were unable to recover it. 00:26:32.726 [2024-11-20 09:17:09.484980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.726 [2024-11-20 09:17:09.485006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.726 qpair failed and we were unable to recover it. 00:26:32.726 [2024-11-20 09:17:09.485093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.726 [2024-11-20 09:17:09.485119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.726 qpair failed and we were unable to recover it. 00:26:32.726 [2024-11-20 09:17:09.485212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.726 [2024-11-20 09:17:09.485239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.726 qpair failed and we were unable to recover it. 00:26:32.726 [2024-11-20 09:17:09.485323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.726 [2024-11-20 09:17:09.485351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.726 qpair failed and we were unable to recover it. 
00:26:32.726 [2024-11-20 09:17:09.485444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.726 [2024-11-20 09:17:09.485478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.726 qpair failed and we were unable to recover it. 00:26:32.726 [2024-11-20 09:17:09.485591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.726 [2024-11-20 09:17:09.485631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.726 qpair failed and we were unable to recover it. 00:26:32.727 [2024-11-20 09:17:09.485755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.727 [2024-11-20 09:17:09.485788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.727 qpair failed and we were unable to recover it. 00:26:32.727 [2024-11-20 09:17:09.485892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.727 [2024-11-20 09:17:09.485925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.727 qpair failed and we were unable to recover it. 00:26:32.727 [2024-11-20 09:17:09.486024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.727 [2024-11-20 09:17:09.486056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.727 qpair failed and we were unable to recover it. 
00:26:32.727 [2024-11-20 09:17:09.486161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.727 [2024-11-20 09:17:09.486187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.727 qpair failed and we were unable to recover it. 00:26:32.727 [2024-11-20 09:17:09.486295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.727 [2024-11-20 09:17:09.486341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.727 qpair failed and we were unable to recover it. 00:26:32.727 [2024-11-20 09:17:09.486445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.727 [2024-11-20 09:17:09.486483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.727 qpair failed and we were unable to recover it. 00:26:32.727 [2024-11-20 09:17:09.486634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.727 [2024-11-20 09:17:09.486667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.727 qpair failed and we were unable to recover it. 00:26:32.727 [2024-11-20 09:17:09.486791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.727 [2024-11-20 09:17:09.486823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.727 qpair failed and we were unable to recover it. 
00:26:32.727 [2024-11-20 09:17:09.486923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.727 [2024-11-20 09:17:09.486950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.727 qpair failed and we were unable to recover it. 00:26:32.727 [2024-11-20 09:17:09.487063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.727 [2024-11-20 09:17:09.487089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.727 qpair failed and we were unable to recover it. 00:26:32.727 [2024-11-20 09:17:09.487203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.727 [2024-11-20 09:17:09.487229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.727 qpair failed and we were unable to recover it. 00:26:32.727 [2024-11-20 09:17:09.487336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.727 [2024-11-20 09:17:09.487376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.727 qpair failed and we were unable to recover it. 00:26:32.727 [2024-11-20 09:17:09.487502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.727 [2024-11-20 09:17:09.487550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.727 qpair failed and we were unable to recover it. 
00:26:32.727 [2024-11-20 09:17:09.487662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.727 [2024-11-20 09:17:09.487707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.727 qpair failed and we were unable to recover it. 00:26:32.727 [2024-11-20 09:17:09.487815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.727 [2024-11-20 09:17:09.487841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.727 qpair failed and we were unable to recover it. 00:26:32.727 [2024-11-20 09:17:09.487941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.727 [2024-11-20 09:17:09.487967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.727 qpair failed and we were unable to recover it. 00:26:32.727 [2024-11-20 09:17:09.488062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.727 [2024-11-20 09:17:09.488091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.727 qpair failed and we were unable to recover it. 00:26:32.727 [2024-11-20 09:17:09.488175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.727 [2024-11-20 09:17:09.488202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.727 qpair failed and we were unable to recover it. 
00:26:32.727 [2024-11-20 09:17:09.488294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.727 [2024-11-20 09:17:09.488332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.727 qpair failed and we were unable to recover it. 00:26:32.727 [2024-11-20 09:17:09.488425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.727 [2024-11-20 09:17:09.488452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.727 qpair failed and we were unable to recover it. 00:26:32.727 [2024-11-20 09:17:09.488537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.727 [2024-11-20 09:17:09.488564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.727 qpair failed and we were unable to recover it. 00:26:32.727 [2024-11-20 09:17:09.488658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.727 [2024-11-20 09:17:09.488685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.727 qpair failed and we were unable to recover it. 00:26:32.727 [2024-11-20 09:17:09.488779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.727 [2024-11-20 09:17:09.488809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.727 qpair failed and we were unable to recover it. 
00:26:32.727 [2024-11-20 09:17:09.488946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.727 [2024-11-20 09:17:09.488977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.727 qpair failed and we were unable to recover it. 00:26:32.728 [2024-11-20 09:17:09.489090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.728 [2024-11-20 09:17:09.489117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.728 qpair failed and we were unable to recover it. 00:26:32.728 [2024-11-20 09:17:09.489209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.728 [2024-11-20 09:17:09.489241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.728 qpair failed and we were unable to recover it. 00:26:32.728 [2024-11-20 09:17:09.489328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.728 [2024-11-20 09:17:09.489355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.728 qpair failed and we were unable to recover it. 00:26:32.728 [2024-11-20 09:17:09.489462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.728 [2024-11-20 09:17:09.489507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.728 qpair failed and we were unable to recover it. 
00:26:32.728 [2024-11-20 09:17:09.489646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.728 [2024-11-20 09:17:09.489675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.728 qpair failed and we were unable to recover it. 00:26:32.728 [2024-11-20 09:17:09.489821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.728 [2024-11-20 09:17:09.489866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.728 qpair failed and we were unable to recover it. 00:26:32.728 [2024-11-20 09:17:09.489982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.728 [2024-11-20 09:17:09.490008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.728 qpair failed and we were unable to recover it. 00:26:32.728 [2024-11-20 09:17:09.490107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.728 [2024-11-20 09:17:09.490133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.728 qpair failed and we were unable to recover it. 00:26:32.728 [2024-11-20 09:17:09.490226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.728 [2024-11-20 09:17:09.490252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.728 qpair failed and we were unable to recover it. 
00:26:32.728 [2024-11-20 09:17:09.490347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.728 [2024-11-20 09:17:09.490373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.728 qpair failed and we were unable to recover it. 00:26:32.728 [2024-11-20 09:17:09.490499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.728 [2024-11-20 09:17:09.490525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.728 qpair failed and we were unable to recover it. 00:26:32.728 [2024-11-20 09:17:09.490606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.728 [2024-11-20 09:17:09.490633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.728 qpair failed and we were unable to recover it. 00:26:32.728 [2024-11-20 09:17:09.490724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.728 [2024-11-20 09:17:09.490750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.728 qpair failed and we were unable to recover it. 00:26:32.728 [2024-11-20 09:17:09.490869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.728 [2024-11-20 09:17:09.490894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.728 qpair failed and we were unable to recover it. 
00:26:32.728 [2024-11-20 09:17:09.491012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.728 [2024-11-20 09:17:09.491038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.728 qpair failed and we were unable to recover it. 00:26:32.728 [2024-11-20 09:17:09.491156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.728 [2024-11-20 09:17:09.491182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.728 qpair failed and we were unable to recover it. 00:26:32.728 [2024-11-20 09:17:09.491273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.728 [2024-11-20 09:17:09.491298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.728 qpair failed and we were unable to recover it. 00:26:32.728 [2024-11-20 09:17:09.491387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.728 [2024-11-20 09:17:09.491413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.728 qpair failed and we were unable to recover it. 00:26:32.728 [2024-11-20 09:17:09.491524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.728 [2024-11-20 09:17:09.491550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.728 qpair failed and we were unable to recover it. 
00:26:32.728 [2024-11-20 09:17:09.491674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.728 [2024-11-20 09:17:09.491699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.728 qpair failed and we were unable to recover it.
00:26:32.728 [2024-11-20 09:17:09.491821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.728 [2024-11-20 09:17:09.491847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.728 qpair failed and we were unable to recover it.
00:26:32.728 [2024-11-20 09:17:09.491963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.728 [2024-11-20 09:17:09.491989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.728 qpair failed and we were unable to recover it.
00:26:32.728 [2024-11-20 09:17:09.492081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.728 [2024-11-20 09:17:09.492107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.728 qpair failed and we were unable to recover it.
00:26:32.728 [2024-11-20 09:17:09.492206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.728 [2024-11-20 09:17:09.492246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.728 qpair failed and we were unable to recover it.
00:26:32.728 [2024-11-20 09:17:09.492358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.728 [2024-11-20 09:17:09.492388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.728 qpair failed and we were unable to recover it.
00:26:32.728 [2024-11-20 09:17:09.492518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.728 [2024-11-20 09:17:09.492556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.728 qpair failed and we were unable to recover it.
00:26:32.728 [2024-11-20 09:17:09.492683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.728 [2024-11-20 09:17:09.492714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.728 qpair failed and we were unable to recover it.
00:26:32.728 [2024-11-20 09:17:09.492840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.728 [2024-11-20 09:17:09.492871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.728 qpair failed and we were unable to recover it.
00:26:32.728 [2024-11-20 09:17:09.493005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.728 [2024-11-20 09:17:09.493035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.728 qpair failed and we were unable to recover it.
00:26:32.728 [2024-11-20 09:17:09.493144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.728 [2024-11-20 09:17:09.493172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.728 qpair failed and we were unable to recover it.
00:26:32.728 [2024-11-20 09:17:09.493273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.728 [2024-11-20 09:17:09.493320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.728 qpair failed and we were unable to recover it.
00:26:32.728 [2024-11-20 09:17:09.493446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.728 [2024-11-20 09:17:09.493474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.728 qpair failed and we were unable to recover it.
00:26:32.728 [2024-11-20 09:17:09.493574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.728 [2024-11-20 09:17:09.493604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.728 qpair failed and we were unable to recover it.
00:26:32.728 [2024-11-20 09:17:09.493706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.728 [2024-11-20 09:17:09.493735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.728 qpair failed and we were unable to recover it.
00:26:32.728 [2024-11-20 09:17:09.493854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.728 [2024-11-20 09:17:09.493884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.728 qpair failed and we were unable to recover it.
00:26:32.728 [2024-11-20 09:17:09.493970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.493999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.494094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.494125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.494236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.494279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.494429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.494459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.494577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.494606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.494743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.494785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.494909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.494942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.495077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.495103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.495188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.495214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.495323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.495362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.495456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.495484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.495579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.495605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.495740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.495767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.495862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.495890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.496016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.496044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.496154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.496179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.496285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.496336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.496432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.496459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.496557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.496585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.496674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.496701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.496822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.496850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.496967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.496995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.497078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.497105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.497227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.497254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.497374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.497400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.497486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.497529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.497617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.497645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.497744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.497773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.497895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.497926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.498019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.498047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.498162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.498190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.498314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.498341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.498428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.498454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.498581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.498613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.498719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.498745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.498829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.498856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.498974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.498999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.499092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.499118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.499244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.499283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.499382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.499410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.729 [2024-11-20 09:17:09.499523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.729 [2024-11-20 09:17:09.499550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.729 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.499669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.499696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.499786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.499813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.499910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.499937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.500023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.500049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.500136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.500163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.500248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.500274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.500369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.500395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.500532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.500558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.500672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.500698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.500809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.500835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.500914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.500940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.501050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.501078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.501165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.501193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.501277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.501311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.501400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.501427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.501507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.501550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.501664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.501693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.501834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.501861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.501943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.501971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.502063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.502091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.502211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.502238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.502373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.502413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.502534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.502560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.502644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.502670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.502806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.502833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.502915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.502941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.503023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.503049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.503167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.503194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.503325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.503364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.503481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.503510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.503600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.503628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.503710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.503736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.503850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.503883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.503970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.503996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.504081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.504108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.504203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.504242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.504347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.504376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.504487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.504514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.504630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.504657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.504741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.504768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.504878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.504904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.504997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.505025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.505106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.505132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.505216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.505242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.505354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.505382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.505468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.730 [2024-11-20 09:17:09.505494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.730 qpair failed and we were unable to recover it.
00:26:32.730 [2024-11-20 09:17:09.505573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.731 [2024-11-20 09:17:09.505599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.731 qpair failed and we were unable to recover it.
00:26:32.731 [2024-11-20 09:17:09.505688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.731 [2024-11-20 09:17:09.505714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.731 qpair failed and we were unable to recover it.
00:26:32.731 [2024-11-20 09:17:09.505799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.731 [2024-11-20 09:17:09.505827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.731 qpair failed and we were unable to recover it.
00:26:32.731 [2024-11-20 09:17:09.505928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.731 [2024-11-20 09:17:09.505956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.731 qpair failed and we were unable to recover it.
00:26:32.731 [2024-11-20 09:17:09.506048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.731 [2024-11-20 09:17:09.506076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.731 qpair failed and we were unable to recover it.
00:26:32.731 [2024-11-20 09:17:09.506166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.731 [2024-11-20 09:17:09.506192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.731 qpair failed and we were unable to recover it.
00:26:32.731 [2024-11-20 09:17:09.506275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.731 [2024-11-20 09:17:09.506312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.731 qpair failed and we were unable to recover it.
00:26:32.731 [2024-11-20 09:17:09.506395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.731 [2024-11-20 09:17:09.506422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.731 qpair failed and we were unable to recover it.
00:26:32.731 [2024-11-20 09:17:09.506511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.731 [2024-11-20 09:17:09.506537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.731 qpair failed and we were unable to recover it.
00:26:32.731 [2024-11-20 09:17:09.506629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.731 [2024-11-20 09:17:09.506656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.731 qpair failed and we were unable to recover it.
00:26:32.731 [2024-11-20 09:17:09.506795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.731 [2024-11-20 09:17:09.506822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.731 qpair failed and we were unable to recover it.
00:26:32.731 [2024-11-20 09:17:09.506900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.731 [2024-11-20 09:17:09.506927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.731 qpair failed and we were unable to recover it.
00:26:32.731 [2024-11-20 09:17:09.507008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.731 [2024-11-20 09:17:09.507034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.731 qpair failed and we were unable to recover it. 00:26:32.731 [2024-11-20 09:17:09.507162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.731 [2024-11-20 09:17:09.507202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.731 qpair failed and we were unable to recover it. 00:26:32.731 [2024-11-20 09:17:09.507316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.731 [2024-11-20 09:17:09.507345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.731 qpair failed and we were unable to recover it. 00:26:32.731 [2024-11-20 09:17:09.507436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.731 [2024-11-20 09:17:09.507463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.731 qpair failed and we were unable to recover it. 00:26:32.731 [2024-11-20 09:17:09.507581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.731 [2024-11-20 09:17:09.507606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.731 qpair failed and we were unable to recover it. 
00:26:32.731 [2024-11-20 09:17:09.507743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.731 [2024-11-20 09:17:09.507769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.731 qpair failed and we were unable to recover it. 00:26:32.731 [2024-11-20 09:17:09.507881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.731 [2024-11-20 09:17:09.507907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.731 qpair failed and we were unable to recover it. 00:26:32.731 [2024-11-20 09:17:09.508021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.731 [2024-11-20 09:17:09.508047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.731 qpair failed and we were unable to recover it. 00:26:32.731 [2024-11-20 09:17:09.508171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.731 [2024-11-20 09:17:09.508211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.731 qpair failed and we were unable to recover it. 00:26:32.731 [2024-11-20 09:17:09.508331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.731 [2024-11-20 09:17:09.508359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.731 qpair failed and we were unable to recover it. 
00:26:32.731 [2024-11-20 09:17:09.508446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.731 [2024-11-20 09:17:09.508473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.731 qpair failed and we were unable to recover it. 00:26:32.731 [2024-11-20 09:17:09.508556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.731 [2024-11-20 09:17:09.508582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.731 qpair failed and we were unable to recover it. 00:26:32.731 [2024-11-20 09:17:09.508659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.731 [2024-11-20 09:17:09.508685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.731 qpair failed and we were unable to recover it. 00:26:32.731 [2024-11-20 09:17:09.508821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.731 [2024-11-20 09:17:09.508847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.731 qpair failed and we were unable to recover it. 00:26:32.731 [2024-11-20 09:17:09.508934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.731 [2024-11-20 09:17:09.508965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.731 qpair failed and we were unable to recover it. 
00:26:32.731 [2024-11-20 09:17:09.509098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.731 [2024-11-20 09:17:09.509137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.731 qpair failed and we were unable to recover it. 00:26:32.731 [2024-11-20 09:17:09.509227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.731 [2024-11-20 09:17:09.509256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.731 qpair failed and we were unable to recover it. 00:26:32.731 [2024-11-20 09:17:09.509356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.731 [2024-11-20 09:17:09.509383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.731 qpair failed and we were unable to recover it. 00:26:32.731 [2024-11-20 09:17:09.509467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.731 [2024-11-20 09:17:09.509493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.731 qpair failed and we were unable to recover it. 00:26:32.731 [2024-11-20 09:17:09.509580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.731 [2024-11-20 09:17:09.509605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.731 qpair failed and we were unable to recover it. 
00:26:32.731 [2024-11-20 09:17:09.509712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.731 [2024-11-20 09:17:09.509738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.731 qpair failed and we were unable to recover it. 00:26:32.731 [2024-11-20 09:17:09.509853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.731 [2024-11-20 09:17:09.509878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.731 qpair failed and we were unable to recover it. 00:26:32.731 [2024-11-20 09:17:09.509963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.731 [2024-11-20 09:17:09.509989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.731 qpair failed and we were unable to recover it. 00:26:32.731 [2024-11-20 09:17:09.510079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.731 [2024-11-20 09:17:09.510106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.731 qpair failed and we were unable to recover it. 00:26:32.731 [2024-11-20 09:17:09.510222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.731 [2024-11-20 09:17:09.510248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.731 qpair failed and we were unable to recover it. 
00:26:32.731 [2024-11-20 09:17:09.510334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.731 [2024-11-20 09:17:09.510361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.731 qpair failed and we were unable to recover it. 00:26:32.731 [2024-11-20 09:17:09.510473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.731 [2024-11-20 09:17:09.510499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.731 qpair failed and we were unable to recover it. 00:26:32.731 [2024-11-20 09:17:09.510586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.731 [2024-11-20 09:17:09.510612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.731 qpair failed and we were unable to recover it. 00:26:32.731 [2024-11-20 09:17:09.510698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.731 [2024-11-20 09:17:09.510724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.731 qpair failed and we were unable to recover it. 00:26:32.731 [2024-11-20 09:17:09.510833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.731 [2024-11-20 09:17:09.510859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.731 qpair failed and we were unable to recover it. 
00:26:32.731 [2024-11-20 09:17:09.510933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.731 [2024-11-20 09:17:09.510959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.732 qpair failed and we were unable to recover it. 00:26:32.732 [2024-11-20 09:17:09.511065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.732 [2024-11-20 09:17:09.511104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.732 qpair failed and we were unable to recover it. 00:26:32.732 [2024-11-20 09:17:09.511236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.732 [2024-11-20 09:17:09.511275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.732 qpair failed and we were unable to recover it. 00:26:32.732 [2024-11-20 09:17:09.511377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.732 [2024-11-20 09:17:09.511406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.732 qpair failed and we were unable to recover it. 00:26:32.732 [2024-11-20 09:17:09.511493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.732 [2024-11-20 09:17:09.511519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.732 qpair failed and we were unable to recover it. 
00:26:32.732 [2024-11-20 09:17:09.511629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.732 [2024-11-20 09:17:09.511655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.732 qpair failed and we were unable to recover it. 00:26:32.732 [2024-11-20 09:17:09.511745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.732 [2024-11-20 09:17:09.511771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.732 qpair failed and we were unable to recover it. 00:26:32.732 [2024-11-20 09:17:09.511887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.732 [2024-11-20 09:17:09.511915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.732 qpair failed and we were unable to recover it. 00:26:32.732 [2024-11-20 09:17:09.512004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.732 [2024-11-20 09:17:09.512031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.732 qpair failed and we were unable to recover it. 00:26:32.732 [2024-11-20 09:17:09.512139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.732 [2024-11-20 09:17:09.512165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.732 qpair failed and we were unable to recover it. 
00:26:32.732 [2024-11-20 09:17:09.512249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.732 [2024-11-20 09:17:09.512274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.732 qpair failed and we were unable to recover it. 00:26:32.732 [2024-11-20 09:17:09.512390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.732 [2024-11-20 09:17:09.512429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.732 qpair failed and we were unable to recover it. 00:26:32.732 [2024-11-20 09:17:09.512559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.732 [2024-11-20 09:17:09.512587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.732 qpair failed and we were unable to recover it. 00:26:32.732 [2024-11-20 09:17:09.512731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.732 [2024-11-20 09:17:09.512757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.732 qpair failed and we were unable to recover it. 00:26:32.732 [2024-11-20 09:17:09.512838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.732 [2024-11-20 09:17:09.512864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.732 qpair failed and we were unable to recover it. 
00:26:32.732 [2024-11-20 09:17:09.512951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.732 [2024-11-20 09:17:09.512977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.732 qpair failed and we were unable to recover it. 00:26:32.732 [2024-11-20 09:17:09.513090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.732 [2024-11-20 09:17:09.513116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.732 qpair failed and we were unable to recover it. 00:26:32.732 [2024-11-20 09:17:09.513197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.732 [2024-11-20 09:17:09.513222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.732 qpair failed and we were unable to recover it. 00:26:32.732 [2024-11-20 09:17:09.513338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.732 [2024-11-20 09:17:09.513364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.732 qpair failed and we were unable to recover it. 00:26:32.732 [2024-11-20 09:17:09.513494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.732 [2024-11-20 09:17:09.513520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.732 qpair failed and we were unable to recover it. 
00:26:32.732 [2024-11-20 09:17:09.513604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.732 [2024-11-20 09:17:09.513631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.732 qpair failed and we were unable to recover it. 00:26:32.732 [2024-11-20 09:17:09.513719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.732 [2024-11-20 09:17:09.513745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.732 qpair failed and we were unable to recover it. 00:26:32.732 [2024-11-20 09:17:09.513860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.732 [2024-11-20 09:17:09.513886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.732 qpair failed and we were unable to recover it. 00:26:32.732 [2024-11-20 09:17:09.513974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.732 [2024-11-20 09:17:09.514000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.732 qpair failed and we were unable to recover it. 00:26:32.732 [2024-11-20 09:17:09.514119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.732 [2024-11-20 09:17:09.514145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.732 qpair failed and we were unable to recover it. 
00:26:32.732 [2024-11-20 09:17:09.514265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.732 [2024-11-20 09:17:09.514291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.732 qpair failed and we were unable to recover it. 00:26:32.732 [2024-11-20 09:17:09.514391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.732 [2024-11-20 09:17:09.514419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.732 qpair failed and we were unable to recover it. 00:26:32.732 [2024-11-20 09:17:09.514518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.732 [2024-11-20 09:17:09.514557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.732 qpair failed and we were unable to recover it. 00:26:32.732 [2024-11-20 09:17:09.514651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.732 [2024-11-20 09:17:09.514678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.732 qpair failed and we were unable to recover it. 00:26:32.732 [2024-11-20 09:17:09.514774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.732 [2024-11-20 09:17:09.514802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.732 qpair failed and we were unable to recover it. 
00:26:32.732 [2024-11-20 09:17:09.514917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.732 [2024-11-20 09:17:09.514944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.732 qpair failed and we were unable to recover it. 00:26:32.732 [2024-11-20 09:17:09.515032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.732 [2024-11-20 09:17:09.515058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.732 qpair failed and we were unable to recover it. 00:26:32.732 [2024-11-20 09:17:09.515177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.732 [2024-11-20 09:17:09.515203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.732 qpair failed and we were unable to recover it. 00:26:32.732 [2024-11-20 09:17:09.515289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.733 [2024-11-20 09:17:09.515323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.733 qpair failed and we were unable to recover it. 00:26:32.733 [2024-11-20 09:17:09.515416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.733 [2024-11-20 09:17:09.515442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.733 qpair failed and we were unable to recover it. 
00:26:32.733 [2024-11-20 09:17:09.515528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.733 [2024-11-20 09:17:09.515554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.733 qpair failed and we were unable to recover it. 00:26:32.733 [2024-11-20 09:17:09.515636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.733 [2024-11-20 09:17:09.515661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.733 qpair failed and we were unable to recover it. 00:26:32.733 [2024-11-20 09:17:09.515748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.733 [2024-11-20 09:17:09.515774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.733 qpair failed and we were unable to recover it. 00:26:32.733 [2024-11-20 09:17:09.515885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.733 [2024-11-20 09:17:09.515915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.733 qpair failed and we were unable to recover it. 00:26:32.733 [2024-11-20 09:17:09.516038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.733 [2024-11-20 09:17:09.516064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.733 qpair failed and we were unable to recover it. 
00:26:32.733 [2024-11-20 09:17:09.516166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.733 [2024-11-20 09:17:09.516205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.733 qpair failed and we were unable to recover it. 00:26:32.733 [2024-11-20 09:17:09.516328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.733 [2024-11-20 09:17:09.516366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.733 qpair failed and we were unable to recover it. 00:26:32.733 [2024-11-20 09:17:09.516455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.733 [2024-11-20 09:17:09.516481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.733 qpair failed and we were unable to recover it. 00:26:32.733 [2024-11-20 09:17:09.516571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.733 [2024-11-20 09:17:09.516597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.733 qpair failed and we were unable to recover it. 00:26:32.733 [2024-11-20 09:17:09.516684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.733 [2024-11-20 09:17:09.516709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.733 qpair failed and we were unable to recover it. 
00:26:32.733 [2024-11-20 09:17:09.516794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.733 [2024-11-20 09:17:09.516820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.733 qpair failed and we were unable to recover it.
00:26:32.733 [2024-11-20 09:17:09.516956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.733 [2024-11-20 09:17:09.516982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.733 qpair failed and we were unable to recover it.
00:26:32.733 [2024-11-20 09:17:09.517078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.733 [2024-11-20 09:17:09.517117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.733 qpair failed and we were unable to recover it.
00:26:32.733 [2024-11-20 09:17:09.517217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.733 [2024-11-20 09:17:09.517255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.733 qpair failed and we were unable to recover it.
00:26:32.733 [2024-11-20 09:17:09.517363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.733 [2024-11-20 09:17:09.517392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.733 qpair failed and we were unable to recover it.
00:26:32.733 [2024-11-20 09:17:09.517479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.733 [2024-11-20 09:17:09.517506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.733 qpair failed and we were unable to recover it.
00:26:32.733 [2024-11-20 09:17:09.517590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.733 [2024-11-20 09:17:09.517616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.733 qpair failed and we were unable to recover it.
00:26:32.733 [2024-11-20 09:17:09.517736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.733 [2024-11-20 09:17:09.517761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.733 qpair failed and we were unable to recover it.
00:26:32.733 [2024-11-20 09:17:09.517856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.733 [2024-11-20 09:17:09.517883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.733 qpair failed and we were unable to recover it.
00:26:32.733 [2024-11-20 09:17:09.518003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.733 [2024-11-20 09:17:09.518032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.733 qpair failed and we were unable to recover it.
00:26:32.733 [2024-11-20 09:17:09.518143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.733 [2024-11-20 09:17:09.518183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.733 qpair failed and we were unable to recover it.
00:26:32.733 [2024-11-20 09:17:09.518271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.733 [2024-11-20 09:17:09.518298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.733 qpair failed and we were unable to recover it.
00:26:32.733 [2024-11-20 09:17:09.518397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.733 [2024-11-20 09:17:09.518423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.733 qpair failed and we were unable to recover it.
00:26:32.733 [2024-11-20 09:17:09.518506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.733 [2024-11-20 09:17:09.518532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.733 qpair failed and we were unable to recover it.
00:26:32.733 [2024-11-20 09:17:09.518619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.733 [2024-11-20 09:17:09.518646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.733 qpair failed and we were unable to recover it.
00:26:32.733 [2024-11-20 09:17:09.518759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.733 [2024-11-20 09:17:09.518787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.733 qpair failed and we were unable to recover it.
00:26:32.733 [2024-11-20 09:17:09.518881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.733 [2024-11-20 09:17:09.518907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.733 qpair failed and we were unable to recover it.
00:26:32.733 [2024-11-20 09:17:09.518994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.733 [2024-11-20 09:17:09.519020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.733 qpair failed and we were unable to recover it.
00:26:32.733 [2024-11-20 09:17:09.519131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.733 [2024-11-20 09:17:09.519157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.733 qpair failed and we were unable to recover it.
00:26:32.733 [2024-11-20 09:17:09.519283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.733 [2024-11-20 09:17:09.519330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.733 qpair failed and we were unable to recover it.
00:26:32.733 [2024-11-20 09:17:09.519452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.733 [2024-11-20 09:17:09.519481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.733 qpair failed and we were unable to recover it.
00:26:32.733 [2024-11-20 09:17:09.519601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.733 [2024-11-20 09:17:09.519628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.733 qpair failed and we were unable to recover it.
00:26:32.733 [2024-11-20 09:17:09.519718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.733 [2024-11-20 09:17:09.519744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.733 qpair failed and we were unable to recover it.
00:26:32.733 [2024-11-20 09:17:09.519834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.733 [2024-11-20 09:17:09.519860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.733 qpair failed and we were unable to recover it.
00:26:32.733 [2024-11-20 09:17:09.519950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.733 [2024-11-20 09:17:09.519976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.733 qpair failed and we were unable to recover it.
00:26:32.733 [2024-11-20 09:17:09.520064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.733 [2024-11-20 09:17:09.520089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.733 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.520180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.520206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.520282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.520316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.520410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.520436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.520543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.520569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.520656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.520681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.520761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.520787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.520927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.520952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.521039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.521072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.521198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.521237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.521349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.521389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.521486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.521514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.521601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.521627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.521722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.521750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.521845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.521871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.521987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.522014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.522119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.522159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.522255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.522283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.522411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.522438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.522528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.522554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.522642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.522669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.522776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.522801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.522920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.522947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.523037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.523064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.523173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.523199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.523284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.523317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.523419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.523448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.523565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.523592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.523674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.523701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.523789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.523817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.523910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.523937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.524027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.524054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.524153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.524192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.524319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.524348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.524464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.524490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.524582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.524609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.524698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.524724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.734 qpair failed and we were unable to recover it.
00:26:32.734 [2024-11-20 09:17:09.524839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.734 [2024-11-20 09:17:09.524866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.524951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.524979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.525064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.525090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.525193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.525219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.525320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.525348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.525474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.525513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.525642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.525670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.525763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.525789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.525872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.525899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.525982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.526009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.526106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.526134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.526223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.526250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.526346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.526373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.526463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.526489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.526601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.526627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.526717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.526743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.526824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.526850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.526941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.526968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.527081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.527110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.527226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.527253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.527355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.527395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.527530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.527557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.527651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.527677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.527766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.527792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.527875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.527902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.527985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.528014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.528103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.528131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.528224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.528249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.528363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.528390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.528480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.528506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.528583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.528608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.528691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.528717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.528811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.528838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.528931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.528958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.529043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.529070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.529166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.529192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.529312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.529339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.529445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.735 [2024-11-20 09:17:09.529472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.735 qpair failed and we were unable to recover it.
00:26:32.735 [2024-11-20 09:17:09.529555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.736 [2024-11-20 09:17:09.529587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.736 qpair failed and we were unable to recover it.
00:26:32.736 [2024-11-20 09:17:09.529667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.736 [2024-11-20 09:17:09.529693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.736 qpair failed and we were unable to recover it.
00:26:32.736 [2024-11-20 09:17:09.529783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.736 [2024-11-20 09:17:09.529811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.736 qpair failed and we were unable to recover it.
00:26:32.736 [2024-11-20 09:17:09.529903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.736 [2024-11-20 09:17:09.529929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.736 qpair failed and we were unable to recover it.
00:26:32.736 [2024-11-20 09:17:09.530064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.736 [2024-11-20 09:17:09.530103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.736 qpair failed and we were unable to recover it.
00:26:32.736 [2024-11-20 09:17:09.530192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.736 [2024-11-20 09:17:09.530218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.736 qpair failed and we were unable to recover it.
00:26:32.736 [2024-11-20 09:17:09.530298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.736 [2024-11-20 09:17:09.530332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.736 qpair failed and we were unable to recover it.
00:26:32.736 [2024-11-20 09:17:09.530411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.736 [2024-11-20 09:17:09.530437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.736 qpair failed and we were unable to recover it.
00:26:32.736 [2024-11-20 09:17:09.530518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.736 [2024-11-20 09:17:09.530543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.736 qpair failed and we were unable to recover it.
00:26:32.736 [2024-11-20 09:17:09.530625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.736 [2024-11-20 09:17:09.530651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.736 qpair failed and we were unable to recover it.
00:26:32.736 [2024-11-20 09:17:09.530767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.736 [2024-11-20 09:17:09.530792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.736 qpair failed and we were unable to recover it.
00:26:32.736 [2024-11-20 09:17:09.530910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.736 [2024-11-20 09:17:09.530937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.736 qpair failed and we were unable to recover it.
00:26:32.736 [2024-11-20 09:17:09.531053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.736 [2024-11-20 09:17:09.531080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.736 qpair failed and we were unable to recover it.
00:26:32.736 [2024-11-20 09:17:09.531167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.736 [2024-11-20 09:17:09.531193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.736 qpair failed and we were unable to recover it.
00:26:32.736 [2024-11-20 09:17:09.531294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.736 [2024-11-20 09:17:09.531327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.736 qpair failed and we were unable to recover it.
00:26:32.736 [2024-11-20 09:17:09.531412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.736 [2024-11-20 09:17:09.531438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.736 qpair failed and we were unable to recover it. 00:26:32.736 [2024-11-20 09:17:09.531530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.736 [2024-11-20 09:17:09.531556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.736 qpair failed and we were unable to recover it. 00:26:32.736 [2024-11-20 09:17:09.531669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.736 [2024-11-20 09:17:09.531695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.736 qpair failed and we were unable to recover it. 00:26:32.736 [2024-11-20 09:17:09.531808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.736 [2024-11-20 09:17:09.531836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.736 qpair failed and we were unable to recover it. 00:26:32.736 [2024-11-20 09:17:09.531926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.736 [2024-11-20 09:17:09.531953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.736 qpair failed and we were unable to recover it. 
00:26:32.736 [2024-11-20 09:17:09.532062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.736 [2024-11-20 09:17:09.532102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.736 qpair failed and we were unable to recover it. 00:26:32.736 [2024-11-20 09:17:09.532206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.736 [2024-11-20 09:17:09.532233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.736 qpair failed and we were unable to recover it. 00:26:32.736 [2024-11-20 09:17:09.532325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.736 [2024-11-20 09:17:09.532352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.736 qpair failed and we were unable to recover it. 00:26:32.736 [2024-11-20 09:17:09.532450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.736 [2024-11-20 09:17:09.532477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.736 qpair failed and we were unable to recover it. 00:26:32.736 [2024-11-20 09:17:09.532592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.736 [2024-11-20 09:17:09.532618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.736 qpair failed and we were unable to recover it. 
00:26:32.736 [2024-11-20 09:17:09.532706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.736 [2024-11-20 09:17:09.532731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.736 qpair failed and we were unable to recover it. 00:26:32.736 [2024-11-20 09:17:09.532814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.736 [2024-11-20 09:17:09.532842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.736 qpair failed and we were unable to recover it. 00:26:32.736 [2024-11-20 09:17:09.532925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.736 [2024-11-20 09:17:09.532956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.736 qpair failed and we were unable to recover it. 00:26:32.736 [2024-11-20 09:17:09.533060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.736 [2024-11-20 09:17:09.533100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.736 qpair failed and we were unable to recover it. 00:26:32.736 [2024-11-20 09:17:09.533195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.736 [2024-11-20 09:17:09.533224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.736 qpair failed and we were unable to recover it. 
00:26:32.736 [2024-11-20 09:17:09.533310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.736 [2024-11-20 09:17:09.533338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.736 qpair failed and we were unable to recover it. 00:26:32.736 [2024-11-20 09:17:09.533448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.736 [2024-11-20 09:17:09.533474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.736 qpair failed and we were unable to recover it. 00:26:32.736 [2024-11-20 09:17:09.533563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.736 [2024-11-20 09:17:09.533590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 00:26:32.737 [2024-11-20 09:17:09.533708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.533734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 00:26:32.737 [2024-11-20 09:17:09.533846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.533872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 
00:26:32.737 [2024-11-20 09:17:09.533988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.534013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 00:26:32.737 [2024-11-20 09:17:09.534116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.534156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 00:26:32.737 [2024-11-20 09:17:09.534253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.534291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 00:26:32.737 [2024-11-20 09:17:09.534445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.534472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 00:26:32.737 [2024-11-20 09:17:09.534560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.534587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 
00:26:32.737 [2024-11-20 09:17:09.534673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.534699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 00:26:32.737 [2024-11-20 09:17:09.534792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.534820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 00:26:32.737 [2024-11-20 09:17:09.534908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.534935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 00:26:32.737 [2024-11-20 09:17:09.535026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.535057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 00:26:32.737 [2024-11-20 09:17:09.535170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.535198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 
00:26:32.737 [2024-11-20 09:17:09.535317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.535345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 00:26:32.737 [2024-11-20 09:17:09.535460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.535486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 00:26:32.737 [2024-11-20 09:17:09.535572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.535599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 00:26:32.737 [2024-11-20 09:17:09.535767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.535801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 00:26:32.737 [2024-11-20 09:17:09.535940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.535973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 
00:26:32.737 [2024-11-20 09:17:09.536072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.536145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 00:26:32.737 [2024-11-20 09:17:09.536313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.536342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 00:26:32.737 [2024-11-20 09:17:09.536463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.536489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 00:26:32.737 [2024-11-20 09:17:09.536621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.536647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 00:26:32.737 [2024-11-20 09:17:09.536786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.536821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 
00:26:32.737 [2024-11-20 09:17:09.536977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.537006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 00:26:32.737 [2024-11-20 09:17:09.537099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.537126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 00:26:32.737 [2024-11-20 09:17:09.537211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.537236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 00:26:32.737 [2024-11-20 09:17:09.537353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.537380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 00:26:32.737 [2024-11-20 09:17:09.537463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.537491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 
00:26:32.737 [2024-11-20 09:17:09.537605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.537631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 00:26:32.737 [2024-11-20 09:17:09.537716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.537743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 00:26:32.737 [2024-11-20 09:17:09.537840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.537866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 00:26:32.737 [2024-11-20 09:17:09.537955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.537982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 00:26:32.737 [2024-11-20 09:17:09.538072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.538098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 
00:26:32.737 [2024-11-20 09:17:09.538183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.538211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 00:26:32.737 [2024-11-20 09:17:09.538322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.538361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 00:26:32.737 [2024-11-20 09:17:09.538465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.538498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 00:26:32.737 [2024-11-20 09:17:09.538613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.538640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 00:26:32.737 [2024-11-20 09:17:09.538748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.737 [2024-11-20 09:17:09.538773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.737 qpair failed and we were unable to recover it. 
00:26:32.737 [2024-11-20 09:17:09.538854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.738 [2024-11-20 09:17:09.538880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.738 qpair failed and we were unable to recover it. 00:26:32.738 [2024-11-20 09:17:09.538971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.738 [2024-11-20 09:17:09.539000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.738 qpair failed and we were unable to recover it. 00:26:32.738 [2024-11-20 09:17:09.539154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.738 [2024-11-20 09:17:09.539182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.738 qpair failed and we were unable to recover it. 00:26:32.738 [2024-11-20 09:17:09.539292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.738 [2024-11-20 09:17:09.539337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.738 qpair failed and we were unable to recover it. 00:26:32.738 [2024-11-20 09:17:09.539460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.738 [2024-11-20 09:17:09.539487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.738 qpair failed and we were unable to recover it. 
00:26:32.738 [2024-11-20 09:17:09.539574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.738 [2024-11-20 09:17:09.539601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.738 qpair failed and we were unable to recover it. 00:26:32.738 [2024-11-20 09:17:09.539683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.738 [2024-11-20 09:17:09.539708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.738 qpair failed and we were unable to recover it. 00:26:32.738 [2024-11-20 09:17:09.539792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.738 [2024-11-20 09:17:09.539817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.738 qpair failed and we were unable to recover it. 00:26:32.738 [2024-11-20 09:17:09.539898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.738 [2024-11-20 09:17:09.539923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.738 qpair failed and we were unable to recover it. 00:26:32.738 [2024-11-20 09:17:09.540010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.738 [2024-11-20 09:17:09.540035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.738 qpair failed and we were unable to recover it. 
00:26:32.738 [2024-11-20 09:17:09.540117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.738 [2024-11-20 09:17:09.540142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.738 qpair failed and we were unable to recover it. 00:26:32.738 [2024-11-20 09:17:09.540239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.738 [2024-11-20 09:17:09.540267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.738 qpair failed and we were unable to recover it. 00:26:32.738 [2024-11-20 09:17:09.540368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.738 [2024-11-20 09:17:09.540397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.738 qpair failed and we were unable to recover it. 00:26:32.738 [2024-11-20 09:17:09.540487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.738 [2024-11-20 09:17:09.540513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.738 qpair failed and we were unable to recover it. 00:26:32.738 [2024-11-20 09:17:09.540626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.738 [2024-11-20 09:17:09.540653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.738 qpair failed and we were unable to recover it. 
00:26:32.738 [2024-11-20 09:17:09.540742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.738 [2024-11-20 09:17:09.540769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.738 qpair failed and we were unable to recover it. 00:26:32.738 [2024-11-20 09:17:09.540867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.738 [2024-11-20 09:17:09.540894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.738 qpair failed and we were unable to recover it. 00:26:32.738 [2024-11-20 09:17:09.540979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.738 [2024-11-20 09:17:09.541006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.738 qpair failed and we were unable to recover it. 00:26:32.738 [2024-11-20 09:17:09.541094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.738 [2024-11-20 09:17:09.541120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.738 qpair failed and we were unable to recover it. 00:26:32.738 [2024-11-20 09:17:09.541215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.738 [2024-11-20 09:17:09.541241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.738 qpair failed and we were unable to recover it. 
00:26:32.738 [2024-11-20 09:17:09.541335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.738 [2024-11-20 09:17:09.541363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.738 qpair failed and we were unable to recover it. 00:26:32.738 [2024-11-20 09:17:09.541475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.738 [2024-11-20 09:17:09.541514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.738 qpair failed and we were unable to recover it. 00:26:32.738 [2024-11-20 09:17:09.541616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.738 [2024-11-20 09:17:09.541654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.738 qpair failed and we were unable to recover it. 00:26:32.738 [2024-11-20 09:17:09.541773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.738 [2024-11-20 09:17:09.541802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.738 qpair failed and we were unable to recover it. 00:26:32.738 [2024-11-20 09:17:09.541921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.738 [2024-11-20 09:17:09.541953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.738 qpair failed and we were unable to recover it. 
00:26:32.738 [2024-11-20 09:17:09.542044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.738 [2024-11-20 09:17:09.542070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.738 qpair failed and we were unable to recover it. 00:26:32.738 [2024-11-20 09:17:09.542159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.738 [2024-11-20 09:17:09.542185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.738 qpair failed and we were unable to recover it. 00:26:32.738 [2024-11-20 09:17:09.542308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.738 [2024-11-20 09:17:09.542336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.738 qpair failed and we were unable to recover it. 00:26:32.738 [2024-11-20 09:17:09.542432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.738 [2024-11-20 09:17:09.542461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.738 qpair failed and we were unable to recover it. 00:26:32.738 [2024-11-20 09:17:09.542549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.738 [2024-11-20 09:17:09.542576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.738 qpair failed and we were unable to recover it. 
00:26:32.738 [2024-11-20 09:17:09.542664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.738 [2024-11-20 09:17:09.542692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.738 qpair failed and we were unable to recover it.
00:26:32.738 [2024-11-20 09:17:09.542778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.738 [2024-11-20 09:17:09.542804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.738 qpair failed and we were unable to recover it.
00:26:32.738 [2024-11-20 09:17:09.542892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.738 [2024-11-20 09:17:09.542919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.738 qpair failed and we were unable to recover it.
00:26:32.738 [2024-11-20 09:17:09.543008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.738 [2024-11-20 09:17:09.543035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.738 qpair failed and we were unable to recover it.
00:26:32.738 [2024-11-20 09:17:09.543128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.738 [2024-11-20 09:17:09.543155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.738 qpair failed and we were unable to recover it.
00:26:32.738 [2024-11-20 09:17:09.543250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.738 [2024-11-20 09:17:09.543280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.738 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.543381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.543419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.543509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.543536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.543659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.543685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.543778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.543804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.543918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.543943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.544033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.544061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.544145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.544173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.544264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.544292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.544423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.544449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.544564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.544590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.544677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.544703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.544786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.544812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.544942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.544968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.545089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.545116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.545229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.545256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.545369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.545408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.545508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.545536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.545630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.545656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.545740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.545766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.545849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.545876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.545970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.545998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.546107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.546135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.546215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.546242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.546319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.546347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.546428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.546454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.546586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.546612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.546699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.546726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.546813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.546839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.546926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.546958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.547058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.547097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.547189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.547217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.547307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.547334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.547421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.547447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.547555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.547581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.547689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.547715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.547848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.739 [2024-11-20 09:17:09.547875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.739 qpair failed and we were unable to recover it.
00:26:32.739 [2024-11-20 09:17:09.548031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.740 [2024-11-20 09:17:09.548057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.740 qpair failed and we were unable to recover it.
00:26:32.740 [2024-11-20 09:17:09.548173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.740 [2024-11-20 09:17:09.548200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.740 qpair failed and we were unable to recover it.
00:26:32.740 [2024-11-20 09:17:09.548340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.740 [2024-11-20 09:17:09.548367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.740 qpair failed and we were unable to recover it.
00:26:32.740 [2024-11-20 09:17:09.548457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.740 [2024-11-20 09:17:09.548484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.740 qpair failed and we were unable to recover it.
00:26:32.740 [2024-11-20 09:17:09.548597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.740 [2024-11-20 09:17:09.548625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.740 qpair failed and we were unable to recover it.
00:26:32.740 [2024-11-20 09:17:09.548718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.740 [2024-11-20 09:17:09.548744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.740 qpair failed and we were unable to recover it.
00:26:32.740 [2024-11-20 09:17:09.548872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.740 [2024-11-20 09:17:09.548898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.740 qpair failed and we were unable to recover it.
00:26:32.740 [2024-11-20 09:17:09.549012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.740 [2024-11-20 09:17:09.549039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.740 qpair failed and we were unable to recover it.
00:26:32.740 [2024-11-20 09:17:09.549119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.740 [2024-11-20 09:17:09.549144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.740 qpair failed and we were unable to recover it.
00:26:32.740 [2024-11-20 09:17:09.549238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.740 [2024-11-20 09:17:09.549265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.740 qpair failed and we were unable to recover it.
00:26:32.740 [2024-11-20 09:17:09.549389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.740 [2024-11-20 09:17:09.549415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.740 qpair failed and we were unable to recover it.
00:26:32.740 [2024-11-20 09:17:09.549497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.740 [2024-11-20 09:17:09.549523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.740 qpair failed and we were unable to recover it.
00:26:32.740 [2024-11-20 09:17:09.549634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.740 [2024-11-20 09:17:09.549659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.740 qpair failed and we were unable to recover it.
00:26:32.740 [2024-11-20 09:17:09.549742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.740 [2024-11-20 09:17:09.549767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.740 qpair failed and we were unable to recover it.
00:26:32.740 [2024-11-20 09:17:09.549890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.740 [2024-11-20 09:17:09.549917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.740 qpair failed and we were unable to recover it.
00:26:32.740 [2024-11-20 09:17:09.550028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.740 [2024-11-20 09:17:09.550055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.740 qpair failed and we were unable to recover it.
00:26:32.740 [2024-11-20 09:17:09.550173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.740 [2024-11-20 09:17:09.550211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.740 qpair failed and we were unable to recover it.
00:26:32.740 [2024-11-20 09:17:09.550313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.740 [2024-11-20 09:17:09.550339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.740 qpair failed and we were unable to recover it.
00:26:32.740 [2024-11-20 09:17:09.550426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.740 [2024-11-20 09:17:09.550452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.740 qpair failed and we were unable to recover it.
00:26:32.740 [2024-11-20 09:17:09.550539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.740 [2024-11-20 09:17:09.550573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.740 qpair failed and we were unable to recover it.
00:26:32.740 [2024-11-20 09:17:09.550687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.740 [2024-11-20 09:17:09.550713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.740 qpair failed and we were unable to recover it.
00:26:32.740 [2024-11-20 09:17:09.550816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.740 [2024-11-20 09:17:09.550841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.740 qpair failed and we were unable to recover it.
00:26:32.740 [2024-11-20 09:17:09.550917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.740 [2024-11-20 09:17:09.550942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.740 qpair failed and we were unable to recover it.
00:26:32.740 [2024-11-20 09:17:09.551059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.740 [2024-11-20 09:17:09.551088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.740 qpair failed and we were unable to recover it.
00:26:32.740 [2024-11-20 09:17:09.551228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.740 [2024-11-20 09:17:09.551267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.740 qpair failed and we were unable to recover it.
00:26:32.740 [2024-11-20 09:17:09.551393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.740 [2024-11-20 09:17:09.551422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.740 qpair failed and we were unable to recover it.
00:26:32.740 [2024-11-20 09:17:09.551512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.740 [2024-11-20 09:17:09.551539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.740 qpair failed and we were unable to recover it.
00:26:32.740 [2024-11-20 09:17:09.551619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.740 [2024-11-20 09:17:09.551645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.740 qpair failed and we were unable to recover it.
00:26:32.740 [2024-11-20 09:17:09.551724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.740 [2024-11-20 09:17:09.551750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.740 qpair failed and we were unable to recover it.
00:26:32.740 [2024-11-20 09:17:09.551838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.740 [2024-11-20 09:17:09.551866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.740 qpair failed and we were unable to recover it.
00:26:32.740 [2024-11-20 09:17:09.551956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.740 [2024-11-20 09:17:09.551983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.740 qpair failed and we were unable to recover it.
00:26:32.740 [2024-11-20 09:17:09.552085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.740 [2024-11-20 09:17:09.552125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.740 qpair failed and we were unable to recover it.
00:26:32.740 [2024-11-20 09:17:09.552216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.740 [2024-11-20 09:17:09.552244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.740 qpair failed and we were unable to recover it.
00:26:32.740 [2024-11-20 09:17:09.552345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.741 [2024-11-20 09:17:09.552372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.741 qpair failed and we were unable to recover it.
00:26:32.741 [2024-11-20 09:17:09.552496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.741 [2024-11-20 09:17:09.552524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.741 qpair failed and we were unable to recover it.
00:26:32.741 [2024-11-20 09:17:09.552642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.741 [2024-11-20 09:17:09.552671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.741 qpair failed and we were unable to recover it.
00:26:32.741 [2024-11-20 09:17:09.552761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.741 [2024-11-20 09:17:09.552788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.741 qpair failed and we were unable to recover it.
00:26:32.741 [2024-11-20 09:17:09.552885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.741 [2024-11-20 09:17:09.552911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.741 qpair failed and we were unable to recover it.
00:26:32.741 [2024-11-20 09:17:09.552995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.741 [2024-11-20 09:17:09.553023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.741 qpair failed and we were unable to recover it.
00:26:32.741 [2024-11-20 09:17:09.553138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.741 [2024-11-20 09:17:09.553165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.741 qpair failed and we were unable to recover it.
00:26:32.741 [2024-11-20 09:17:09.553263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.741 [2024-11-20 09:17:09.553309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.741 qpair failed and we were unable to recover it.
00:26:32.741 [2024-11-20 09:17:09.553406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.741 [2024-11-20 09:17:09.553433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.741 qpair failed and we were unable to recover it.
00:26:32.741 [2024-11-20 09:17:09.553546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.741 [2024-11-20 09:17:09.553572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.741 qpair failed and we were unable to recover it.
00:26:32.741 [2024-11-20 09:17:09.553661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.741 [2024-11-20 09:17:09.553689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.741 qpair failed and we were unable to recover it.
00:26:32.741 [2024-11-20 09:17:09.553784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.741 [2024-11-20 09:17:09.553812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.741 qpair failed and we were unable to recover it.
00:26:32.741 [2024-11-20 09:17:09.553911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.741 [2024-11-20 09:17:09.553950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.741 qpair failed and we were unable to recover it.
00:26:32.741 [2024-11-20 09:17:09.554109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.741 [2024-11-20 09:17:09.554149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.741 qpair failed and we were unable to recover it.
00:26:32.741 [2024-11-20 09:17:09.554262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.741 [2024-11-20 09:17:09.554301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.741 qpair failed and we were unable to recover it.
00:26:32.741 [2024-11-20 09:17:09.554402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.741 [2024-11-20 09:17:09.554429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.741 qpair failed and we were unable to recover it.
00:26:32.741 [2024-11-20 09:17:09.554517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.741 [2024-11-20 09:17:09.554543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.741 qpair failed and we were unable to recover it.
00:26:32.741 [2024-11-20 09:17:09.554623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.741 [2024-11-20 09:17:09.554648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.741 qpair failed and we were unable to recover it.
00:26:32.741 [2024-11-20 09:17:09.554744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.741 [2024-11-20 09:17:09.554770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.741 qpair failed and we were unable to recover it.
00:26:32.741 [2024-11-20 09:17:09.554861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.741 [2024-11-20 09:17:09.554886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.741 qpair failed and we were unable to recover it.
00:26:32.741 [2024-11-20 09:17:09.554993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.741 [2024-11-20 09:17:09.555019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.741 qpair failed and we were unable to recover it. 00:26:32.741 [2024-11-20 09:17:09.555154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.741 [2024-11-20 09:17:09.555183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.741 qpair failed and we were unable to recover it. 00:26:32.741 [2024-11-20 09:17:09.555272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.741 [2024-11-20 09:17:09.555299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.741 qpair failed and we were unable to recover it. 00:26:32.741 [2024-11-20 09:17:09.555398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.741 [2024-11-20 09:17:09.555424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.741 qpair failed and we were unable to recover it. 00:26:32.741 [2024-11-20 09:17:09.555515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.741 [2024-11-20 09:17:09.555542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.741 qpair failed and we were unable to recover it. 
00:26:32.741 [2024-11-20 09:17:09.555622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.741 [2024-11-20 09:17:09.555648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.741 qpair failed and we were unable to recover it. 00:26:32.742 [2024-11-20 09:17:09.555732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.742 [2024-11-20 09:17:09.555763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.742 qpair failed and we were unable to recover it. 00:26:32.742 [2024-11-20 09:17:09.555881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.742 [2024-11-20 09:17:09.555907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.742 qpair failed and we were unable to recover it. 00:26:32.742 [2024-11-20 09:17:09.556035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.742 [2024-11-20 09:17:09.556074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.742 qpair failed and we were unable to recover it. 00:26:32.742 [2024-11-20 09:17:09.556166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.742 [2024-11-20 09:17:09.556193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.742 qpair failed and we were unable to recover it. 
00:26:32.742 [2024-11-20 09:17:09.556282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.742 [2024-11-20 09:17:09.556314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.742 qpair failed and we were unable to recover it. 00:26:32.742 [2024-11-20 09:17:09.556398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.742 [2024-11-20 09:17:09.556424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.742 qpair failed and we were unable to recover it. 00:26:32.742 [2024-11-20 09:17:09.556534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.742 [2024-11-20 09:17:09.556560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.742 qpair failed and we were unable to recover it. 00:26:32.742 [2024-11-20 09:17:09.556646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.742 [2024-11-20 09:17:09.556672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.742 qpair failed and we were unable to recover it. 00:26:32.742 [2024-11-20 09:17:09.556782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.742 [2024-11-20 09:17:09.556808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.742 qpair failed and we were unable to recover it. 
00:26:32.742 [2024-11-20 09:17:09.556905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.742 [2024-11-20 09:17:09.556931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.742 qpair failed and we were unable to recover it. 00:26:32.742 [2024-11-20 09:17:09.557014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.742 [2024-11-20 09:17:09.557040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.742 qpair failed and we were unable to recover it. 00:26:32.742 [2024-11-20 09:17:09.557116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.742 [2024-11-20 09:17:09.557142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.742 qpair failed and we were unable to recover it. 00:26:32.742 [2024-11-20 09:17:09.557240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.742 [2024-11-20 09:17:09.557280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.742 qpair failed and we were unable to recover it. 00:26:32.742 [2024-11-20 09:17:09.557387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.742 [2024-11-20 09:17:09.557416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.742 qpair failed and we were unable to recover it. 
00:26:32.742 [2024-11-20 09:17:09.557506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.742 [2024-11-20 09:17:09.557534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.742 qpair failed and we were unable to recover it. 00:26:32.742 [2024-11-20 09:17:09.557656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.742 [2024-11-20 09:17:09.557684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.742 qpair failed and we were unable to recover it. 00:26:32.742 [2024-11-20 09:17:09.557777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.742 [2024-11-20 09:17:09.557805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.742 qpair failed and we were unable to recover it. 00:26:32.742 [2024-11-20 09:17:09.557903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.742 [2024-11-20 09:17:09.557932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.742 qpair failed and we were unable to recover it. 00:26:32.742 [2024-11-20 09:17:09.558026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.742 [2024-11-20 09:17:09.558052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.742 qpair failed and we were unable to recover it. 
00:26:32.742 [2024-11-20 09:17:09.558138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.742 [2024-11-20 09:17:09.558165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.742 qpair failed and we were unable to recover it. 00:26:32.742 [2024-11-20 09:17:09.558279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.742 [2024-11-20 09:17:09.558310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.742 qpair failed and we were unable to recover it. 00:26:32.742 [2024-11-20 09:17:09.558404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.742 [2024-11-20 09:17:09.558430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.742 qpair failed and we were unable to recover it. 00:26:32.742 [2024-11-20 09:17:09.558523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.742 [2024-11-20 09:17:09.558562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.742 qpair failed and we were unable to recover it. 00:26:32.742 [2024-11-20 09:17:09.558703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.742 [2024-11-20 09:17:09.558731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.742 qpair failed and we were unable to recover it. 
00:26:32.742 [2024-11-20 09:17:09.558822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.742 [2024-11-20 09:17:09.558849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.742 qpair failed and we were unable to recover it. 00:26:32.742 [2024-11-20 09:17:09.558963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.742 [2024-11-20 09:17:09.558990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.742 qpair failed and we were unable to recover it. 00:26:32.742 [2024-11-20 09:17:09.559071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.742 [2024-11-20 09:17:09.559097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.742 qpair failed and we were unable to recover it. 00:26:32.742 [2024-11-20 09:17:09.559203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.742 [2024-11-20 09:17:09.559249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.742 qpair failed and we were unable to recover it. 00:26:32.742 [2024-11-20 09:17:09.559381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.742 [2024-11-20 09:17:09.559410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.742 qpair failed and we were unable to recover it. 
00:26:32.742 [2024-11-20 09:17:09.559494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.742 [2024-11-20 09:17:09.559520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.742 qpair failed and we were unable to recover it. 00:26:32.742 [2024-11-20 09:17:09.559600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.742 [2024-11-20 09:17:09.559626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.742 qpair failed and we were unable to recover it. 00:26:32.742 [2024-11-20 09:17:09.559732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.742 [2024-11-20 09:17:09.559757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.742 qpair failed and we were unable to recover it. 00:26:32.743 [2024-11-20 09:17:09.559847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.743 [2024-11-20 09:17:09.559873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.743 qpair failed and we were unable to recover it. 00:26:32.743 [2024-11-20 09:17:09.559962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.743 [2024-11-20 09:17:09.559987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.743 qpair failed and we were unable to recover it. 
00:26:32.743 [2024-11-20 09:17:09.560069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.743 [2024-11-20 09:17:09.560095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.743 qpair failed and we were unable to recover it. 00:26:32.743 [2024-11-20 09:17:09.560178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.743 [2024-11-20 09:17:09.560203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.743 qpair failed and we were unable to recover it. 00:26:32.743 [2024-11-20 09:17:09.560346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.743 [2024-11-20 09:17:09.560373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.743 qpair failed and we were unable to recover it. 00:26:32.743 [2024-11-20 09:17:09.560460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.743 [2024-11-20 09:17:09.560486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.743 qpair failed and we were unable to recover it. 00:26:32.743 [2024-11-20 09:17:09.560582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.743 [2024-11-20 09:17:09.560612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.743 qpair failed and we were unable to recover it. 
00:26:32.743 [2024-11-20 09:17:09.560751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.743 [2024-11-20 09:17:09.560778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.743 qpair failed and we were unable to recover it. 00:26:32.743 [2024-11-20 09:17:09.560895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.743 [2024-11-20 09:17:09.560922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.743 qpair failed and we were unable to recover it. 00:26:32.743 [2024-11-20 09:17:09.561016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.743 [2024-11-20 09:17:09.561043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.743 qpair failed and we were unable to recover it. 00:26:32.743 [2024-11-20 09:17:09.561154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.743 [2024-11-20 09:17:09.561179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.743 qpair failed and we were unable to recover it. 00:26:32.743 [2024-11-20 09:17:09.561279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.743 [2024-11-20 09:17:09.561330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.743 qpair failed and we were unable to recover it. 
00:26:32.743 [2024-11-20 09:17:09.561428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.743 [2024-11-20 09:17:09.561456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.743 qpair failed and we were unable to recover it. 00:26:32.743 [2024-11-20 09:17:09.561544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.743 [2024-11-20 09:17:09.561571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.743 qpair failed and we were unable to recover it. 00:26:32.743 [2024-11-20 09:17:09.561652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.743 [2024-11-20 09:17:09.561679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.743 qpair failed and we were unable to recover it. 00:26:32.743 [2024-11-20 09:17:09.561771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.743 [2024-11-20 09:17:09.561799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.743 qpair failed and we were unable to recover it. 00:26:32.743 [2024-11-20 09:17:09.561957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.743 [2024-11-20 09:17:09.561996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.743 qpair failed and we were unable to recover it. 
00:26:32.743 [2024-11-20 09:17:09.562089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.743 [2024-11-20 09:17:09.562116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.743 qpair failed and we were unable to recover it. 00:26:32.743 [2024-11-20 09:17:09.562210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.743 [2024-11-20 09:17:09.562238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.743 qpair failed and we were unable to recover it. 00:26:32.743 [2024-11-20 09:17:09.562336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.743 [2024-11-20 09:17:09.562363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.743 qpair failed and we were unable to recover it. 00:26:32.743 [2024-11-20 09:17:09.562476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.743 [2024-11-20 09:17:09.562502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.743 qpair failed and we were unable to recover it. 00:26:32.743 [2024-11-20 09:17:09.562611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.743 [2024-11-20 09:17:09.562637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.743 qpair failed and we were unable to recover it. 
00:26:32.743 [2024-11-20 09:17:09.562737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.743 [2024-11-20 09:17:09.562769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.743 qpair failed and we were unable to recover it. 00:26:32.743 [2024-11-20 09:17:09.562861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.743 [2024-11-20 09:17:09.562888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.743 qpair failed and we were unable to recover it. 00:26:32.743 [2024-11-20 09:17:09.562984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.743 [2024-11-20 09:17:09.563014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.743 qpair failed and we were unable to recover it. 00:26:32.743 [2024-11-20 09:17:09.563102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.743 [2024-11-20 09:17:09.563130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.743 qpair failed and we were unable to recover it. 00:26:32.743 [2024-11-20 09:17:09.563247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.743 [2024-11-20 09:17:09.563274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.743 qpair failed and we were unable to recover it. 
00:26:32.743 [2024-11-20 09:17:09.563402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.743 [2024-11-20 09:17:09.563429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.743 qpair failed and we were unable to recover it. 00:26:32.743 [2024-11-20 09:17:09.563530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.743 [2024-11-20 09:17:09.563569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.743 qpair failed and we were unable to recover it. 00:26:32.743 [2024-11-20 09:17:09.563665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.743 [2024-11-20 09:17:09.563693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.743 qpair failed and we were unable to recover it. 00:26:32.743 [2024-11-20 09:17:09.563810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.743 [2024-11-20 09:17:09.563836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.743 qpair failed and we were unable to recover it. 00:26:32.743 [2024-11-20 09:17:09.563928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.743 [2024-11-20 09:17:09.563954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.743 qpair failed and we were unable to recover it. 
00:26:32.743 [2024-11-20 09:17:09.564036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.743 [2024-11-20 09:17:09.564062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.743 qpair failed and we were unable to recover it. 00:26:32.743 [2024-11-20 09:17:09.564156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.744 [2024-11-20 09:17:09.564184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.744 qpair failed and we were unable to recover it. 00:26:32.744 [2024-11-20 09:17:09.564275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.744 [2024-11-20 09:17:09.564300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.744 qpair failed and we were unable to recover it. 00:26:32.744 [2024-11-20 09:17:09.564424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.744 [2024-11-20 09:17:09.564450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.744 qpair failed and we were unable to recover it. 00:26:32.744 [2024-11-20 09:17:09.564530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.744 [2024-11-20 09:17:09.564557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.744 qpair failed and we were unable to recover it. 
00:26:32.744 [2024-11-20 09:17:09.564645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.744 [2024-11-20 09:17:09.564671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.744 qpair failed and we were unable to recover it. 00:26:32.744 [2024-11-20 09:17:09.564784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.744 [2024-11-20 09:17:09.564811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.744 qpair failed and we were unable to recover it. 00:26:32.744 [2024-11-20 09:17:09.564900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.744 [2024-11-20 09:17:09.564926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.744 qpair failed and we were unable to recover it. 00:26:32.744 [2024-11-20 09:17:09.565025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.744 [2024-11-20 09:17:09.565065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.744 qpair failed and we were unable to recover it. 00:26:32.744 [2024-11-20 09:17:09.565155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.744 [2024-11-20 09:17:09.565182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.744 qpair failed and we were unable to recover it. 
00:26:32.744 [2024-11-20 09:17:09.565313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.744 [2024-11-20 09:17:09.565342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.744 qpair failed and we were unable to recover it. 00:26:32.744 [2024-11-20 09:17:09.565434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.744 [2024-11-20 09:17:09.565460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.744 qpair failed and we were unable to recover it. 00:26:32.744 [2024-11-20 09:17:09.565550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.744 [2024-11-20 09:17:09.565578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.744 qpair failed and we were unable to recover it. 00:26:32.744 [2024-11-20 09:17:09.565666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.744 [2024-11-20 09:17:09.565693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.744 qpair failed and we were unable to recover it. 00:26:32.744 [2024-11-20 09:17:09.565809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.744 [2024-11-20 09:17:09.565836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.744 qpair failed and we were unable to recover it. 
00:26:32.744 [2024-11-20 09:17:09.565923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.744 [2024-11-20 09:17:09.565949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.744 qpair failed and we were unable to recover it. 00:26:32.744 [2024-11-20 09:17:09.566063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.744 [2024-11-20 09:17:09.566088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.744 qpair failed and we were unable to recover it. 00:26:32.744 [2024-11-20 09:17:09.566179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.744 [2024-11-20 09:17:09.566205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.744 qpair failed and we were unable to recover it. 00:26:32.744 [2024-11-20 09:17:09.566294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.744 [2024-11-20 09:17:09.566326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.744 qpair failed and we were unable to recover it. 00:26:32.744 [2024-11-20 09:17:09.566433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.744 [2024-11-20 09:17:09.566459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.744 qpair failed and we were unable to recover it. 
00:26:32.744 [2024-11-20 09:17:09.566544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.744 [2024-11-20 09:17:09.566571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.744 qpair failed and we were unable to recover it. 00:26:32.744 [2024-11-20 09:17:09.566658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.744 [2024-11-20 09:17:09.566686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.744 qpair failed and we were unable to recover it. 00:26:32.744 [2024-11-20 09:17:09.566814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.744 [2024-11-20 09:17:09.566840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.744 qpair failed and we were unable to recover it. 00:26:32.744 [2024-11-20 09:17:09.566955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.744 [2024-11-20 09:17:09.566984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.744 qpair failed and we were unable to recover it. 00:26:32.744 [2024-11-20 09:17:09.567090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.744 [2024-11-20 09:17:09.567130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.744 qpair failed and we were unable to recover it. 
00:26:32.744 [2024-11-20 09:17:09.567230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.744 [2024-11-20 09:17:09.567259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.744 qpair failed and we were unable to recover it.
00:26:32.744 [2024-11-20 09:17:09.567348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.744 [2024-11-20 09:17:09.567377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.744 qpair failed and we were unable to recover it.
00:26:32.744 [2024-11-20 09:17:09.567464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.744 [2024-11-20 09:17:09.567492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.744 qpair failed and we were unable to recover it.
00:26:32.744 [2024-11-20 09:17:09.567578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.744 [2024-11-20 09:17:09.567624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.744 qpair failed and we were unable to recover it.
00:26:32.744 [2024-11-20 09:17:09.567798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.744 [2024-11-20 09:17:09.567844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.744 qpair failed and we were unable to recover it.
00:26:32.744 [2024-11-20 09:17:09.568004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.744 [2024-11-20 09:17:09.568037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.744 qpair failed and we were unable to recover it.
00:26:32.744 [2024-11-20 09:17:09.568159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.744 [2024-11-20 09:17:09.568186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.744 qpair failed and we were unable to recover it.
00:26:32.744 [2024-11-20 09:17:09.568272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.744 [2024-11-20 09:17:09.568299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.744 qpair failed and we were unable to recover it.
00:26:32.744 [2024-11-20 09:17:09.568423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.744 [2024-11-20 09:17:09.568457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.744 qpair failed and we were unable to recover it.
00:26:32.745 [2024-11-20 09:17:09.568605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.745 [2024-11-20 09:17:09.568631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.745 qpair failed and we were unable to recover it.
00:26:32.745 [2024-11-20 09:17:09.568715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.745 [2024-11-20 09:17:09.568742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.745 qpair failed and we were unable to recover it.
00:26:32.745 [2024-11-20 09:17:09.568858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.745 [2024-11-20 09:17:09.568884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.745 qpair failed and we were unable to recover it.
00:26:32.745 [2024-11-20 09:17:09.568975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.745 [2024-11-20 09:17:09.569001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.745 qpair failed and we were unable to recover it.
00:26:32.745 [2024-11-20 09:17:09.569082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.745 [2024-11-20 09:17:09.569108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.745 qpair failed and we were unable to recover it.
00:26:32.745 [2024-11-20 09:17:09.569192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.745 [2024-11-20 09:17:09.569219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.745 qpair failed and we were unable to recover it.
00:26:32.745 [2024-11-20 09:17:09.569299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.745 [2024-11-20 09:17:09.569332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.745 qpair failed and we were unable to recover it.
00:26:32.745 [2024-11-20 09:17:09.569435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.745 [2024-11-20 09:17:09.569473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.745 qpair failed and we were unable to recover it.
00:26:32.745 [2024-11-20 09:17:09.569565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.745 [2024-11-20 09:17:09.569593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.745 qpair failed and we were unable to recover it.
00:26:32.745 [2024-11-20 09:17:09.569677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.745 [2024-11-20 09:17:09.569704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.745 qpair failed and we were unable to recover it.
00:26:32.745 [2024-11-20 09:17:09.569801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.745 [2024-11-20 09:17:09.569828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.745 qpair failed and we were unable to recover it.
00:26:32.745 [2024-11-20 09:17:09.569913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.745 [2024-11-20 09:17:09.569939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.745 qpair failed and we were unable to recover it.
00:26:32.745 [2024-11-20 09:17:09.570027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.745 [2024-11-20 09:17:09.570055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.745 qpair failed and we were unable to recover it.
00:26:32.745 [2024-11-20 09:17:09.570143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.745 [2024-11-20 09:17:09.570170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.745 qpair failed and we were unable to recover it.
00:26:32.745 [2024-11-20 09:17:09.570277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.745 [2024-11-20 09:17:09.570308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.745 qpair failed and we were unable to recover it.
00:26:32.745 [2024-11-20 09:17:09.570399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.745 [2024-11-20 09:17:09.570425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.745 qpair failed and we were unable to recover it.
00:26:32.745 [2024-11-20 09:17:09.570511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.745 [2024-11-20 09:17:09.570536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.745 qpair failed and we were unable to recover it.
00:26:32.745 [2024-11-20 09:17:09.570727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.745 [2024-11-20 09:17:09.570752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.745 qpair failed and we were unable to recover it.
00:26:32.745 [2024-11-20 09:17:09.570866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.745 [2024-11-20 09:17:09.570891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.745 qpair failed and we were unable to recover it.
00:26:32.745 [2024-11-20 09:17:09.570975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.745 [2024-11-20 09:17:09.571001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.745 qpair failed and we were unable to recover it.
00:26:32.745 [2024-11-20 09:17:09.571114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.745 [2024-11-20 09:17:09.571140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.745 qpair failed and we were unable to recover it.
00:26:32.745 [2024-11-20 09:17:09.571240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.745 [2024-11-20 09:17:09.571279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.745 qpair failed and we were unable to recover it.
00:26:32.745 [2024-11-20 09:17:09.571387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.745 [2024-11-20 09:17:09.571415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.745 qpair failed and we were unable to recover it.
00:26:32.745 [2024-11-20 09:17:09.571562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.745 [2024-11-20 09:17:09.571591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.745 qpair failed and we were unable to recover it.
00:26:32.745 [2024-11-20 09:17:09.571677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.745 [2024-11-20 09:17:09.571705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.745 qpair failed and we were unable to recover it.
00:26:32.745 [2024-11-20 09:17:09.571791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.745 [2024-11-20 09:17:09.571818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.745 qpair failed and we were unable to recover it.
00:26:32.745 [2024-11-20 09:17:09.571955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.745 [2024-11-20 09:17:09.571982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.745 qpair failed and we were unable to recover it.
00:26:32.745 [2024-11-20 09:17:09.572064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.745 [2024-11-20 09:17:09.572091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.745 qpair failed and we were unable to recover it.
00:26:32.745 [2024-11-20 09:17:09.572191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.745 [2024-11-20 09:17:09.572230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.745 qpair failed and we were unable to recover it.
00:26:32.745 [2024-11-20 09:17:09.572341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.745 [2024-11-20 09:17:09.572369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.745 qpair failed and we were unable to recover it.
00:26:32.745 [2024-11-20 09:17:09.572459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.745 [2024-11-20 09:17:09.572485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.745 qpair failed and we were unable to recover it.
00:26:32.745 [2024-11-20 09:17:09.572628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.745 [2024-11-20 09:17:09.572655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.745 qpair failed and we were unable to recover it.
00:26:32.745 [2024-11-20 09:17:09.572742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.745 [2024-11-20 09:17:09.572768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.745 qpair failed and we were unable to recover it.
00:26:32.745 [2024-11-20 09:17:09.572855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.745 [2024-11-20 09:17:09.572881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.745 qpair failed and we were unable to recover it.
00:26:32.745 [2024-11-20 09:17:09.572971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.745 [2024-11-20 09:17:09.572998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.746 qpair failed and we were unable to recover it.
00:26:32.746 [2024-11-20 09:17:09.573092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.746 [2024-11-20 09:17:09.573131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.746 qpair failed and we were unable to recover it.
00:26:32.746 [2024-11-20 09:17:09.573220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.746 [2024-11-20 09:17:09.573253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.746 qpair failed and we were unable to recover it.
00:26:32.746 [2024-11-20 09:17:09.573377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.746 [2024-11-20 09:17:09.573405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.746 qpair failed and we were unable to recover it.
00:26:32.746 [2024-11-20 09:17:09.573496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.746 [2024-11-20 09:17:09.573522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.746 qpair failed and we were unable to recover it.
00:26:32.746 [2024-11-20 09:17:09.573654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.746 [2024-11-20 09:17:09.573680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.746 qpair failed and we were unable to recover it.
00:26:32.746 [2024-11-20 09:17:09.573769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.746 [2024-11-20 09:17:09.573795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.746 qpair failed and we were unable to recover it.
00:26:32.746 [2024-11-20 09:17:09.573878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.746 [2024-11-20 09:17:09.573904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.746 qpair failed and we were unable to recover it.
00:26:32.746 [2024-11-20 09:17:09.573994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.746 [2024-11-20 09:17:09.574022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.746 qpair failed and we were unable to recover it.
00:26:32.746 [2024-11-20 09:17:09.574118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.746 [2024-11-20 09:17:09.574144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.746 qpair failed and we were unable to recover it.
00:26:32.746 [2024-11-20 09:17:09.574262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.746 [2024-11-20 09:17:09.574288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.746 qpair failed and we were unable to recover it.
00:26:32.746 [2024-11-20 09:17:09.574380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.746 [2024-11-20 09:17:09.574407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.746 qpair failed and we were unable to recover it.
00:26:32.746 [2024-11-20 09:17:09.574497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.746 [2024-11-20 09:17:09.574523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.746 qpair failed and we were unable to recover it.
00:26:32.746 [2024-11-20 09:17:09.574616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.746 [2024-11-20 09:17:09.574643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.746 qpair failed and we were unable to recover it.
00:26:32.746 [2024-11-20 09:17:09.574734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.746 [2024-11-20 09:17:09.574760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.746 qpair failed and we were unable to recover it.
00:26:32.746 [2024-11-20 09:17:09.574844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.746 [2024-11-20 09:17:09.574873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.746 qpair failed and we were unable to recover it.
00:26:32.746 [2024-11-20 09:17:09.574961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.746 [2024-11-20 09:17:09.574987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.746 qpair failed and we were unable to recover it.
00:26:32.746 [2024-11-20 09:17:09.575068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.746 [2024-11-20 09:17:09.575094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.746 qpair failed and we were unable to recover it.
00:26:32.746 [2024-11-20 09:17:09.575180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.746 [2024-11-20 09:17:09.575206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.746 qpair failed and we were unable to recover it.
00:26:32.746 [2024-11-20 09:17:09.575316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.746 [2024-11-20 09:17:09.575345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.746 qpair failed and we were unable to recover it.
00:26:32.746 [2024-11-20 09:17:09.575424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.746 [2024-11-20 09:17:09.575449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.746 qpair failed and we were unable to recover it.
00:26:32.746 [2024-11-20 09:17:09.575539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.746 [2024-11-20 09:17:09.575566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.746 qpair failed and we were unable to recover it.
00:26:32.746 [2024-11-20 09:17:09.575650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.746 [2024-11-20 09:17:09.575676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.746 qpair failed and we were unable to recover it.
00:26:32.746 [2024-11-20 09:17:09.575787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.746 [2024-11-20 09:17:09.575813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.746 qpair failed and we were unable to recover it.
00:26:32.746 [2024-11-20 09:17:09.575925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.746 [2024-11-20 09:17:09.575951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.746 qpair failed and we were unable to recover it.
00:26:32.746 [2024-11-20 09:17:09.576034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.746 [2024-11-20 09:17:09.576060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.746 qpair failed and we were unable to recover it.
00:26:32.746 [2024-11-20 09:17:09.576179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.746 [2024-11-20 09:17:09.576205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.746 qpair failed and we were unable to recover it.
00:26:32.746 [2024-11-20 09:17:09.576293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.746 [2024-11-20 09:17:09.576327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.746 qpair failed and we were unable to recover it.
00:26:32.746 [2024-11-20 09:17:09.576537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.746 [2024-11-20 09:17:09.576576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.746 qpair failed and we were unable to recover it.
00:26:32.746 [2024-11-20 09:17:09.576663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.746 [2024-11-20 09:17:09.576696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.746 qpair failed and we were unable to recover it.
00:26:32.747 [2024-11-20 09:17:09.576787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.747 [2024-11-20 09:17:09.576814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.747 qpair failed and we were unable to recover it.
00:26:32.747 [2024-11-20 09:17:09.576894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.747 [2024-11-20 09:17:09.576920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.747 qpair failed and we were unable to recover it.
00:26:32.747 [2024-11-20 09:17:09.577029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.747 [2024-11-20 09:17:09.577054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.747 qpair failed and we were unable to recover it.
00:26:32.747 [2024-11-20 09:17:09.577133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.747 [2024-11-20 09:17:09.577159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.747 qpair failed and we were unable to recover it.
00:26:32.747 [2024-11-20 09:17:09.577239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.747 [2024-11-20 09:17:09.577265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.747 qpair failed and we were unable to recover it.
00:26:32.747 [2024-11-20 09:17:09.577396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.747 [2024-11-20 09:17:09.577435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.747 qpair failed and we were unable to recover it.
00:26:32.747 [2024-11-20 09:17:09.577552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.747 [2024-11-20 09:17:09.577580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.747 qpair failed and we were unable to recover it.
00:26:32.747 [2024-11-20 09:17:09.577693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.747 [2024-11-20 09:17:09.577719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.747 qpair failed and we were unable to recover it.
00:26:32.747 [2024-11-20 09:17:09.577804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.747 [2024-11-20 09:17:09.577831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.747 qpair failed and we were unable to recover it.
00:26:32.747 [2024-11-20 09:17:09.577915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.747 [2024-11-20 09:17:09.577941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.747 qpair failed and we were unable to recover it.
00:26:32.747 [2024-11-20 09:17:09.578050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.747 [2024-11-20 09:17:09.578076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.747 qpair failed and we were unable to recover it.
00:26:32.747 [2024-11-20 09:17:09.578162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.747 [2024-11-20 09:17:09.578188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.747 qpair failed and we were unable to recover it.
00:26:32.747 [2024-11-20 09:17:09.578286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.747 [2024-11-20 09:17:09.578332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.747 qpair failed and we were unable to recover it.
00:26:32.747 [2024-11-20 09:17:09.578431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.747 [2024-11-20 09:17:09.578459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.747 qpair failed and we were unable to recover it.
00:26:32.747 [2024-11-20 09:17:09.578546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.747 [2024-11-20 09:17:09.578574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.747 qpair failed and we were unable to recover it.
00:26:32.747 [2024-11-20 09:17:09.578664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.747 [2024-11-20 09:17:09.578690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.747 qpair failed and we were unable to recover it.
00:26:32.747 [2024-11-20 09:17:09.578827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.747 [2024-11-20 09:17:09.578852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.747 qpair failed and we were unable to recover it.
00:26:32.747 [2024-11-20 09:17:09.578935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.747 [2024-11-20 09:17:09.578962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.747 qpair failed and we were unable to recover it.
00:26:32.747 [2024-11-20 09:17:09.579075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.747 [2024-11-20 09:17:09.579101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.747 qpair failed and we were unable to recover it.
00:26:32.747 [2024-11-20 09:17:09.579204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.747 [2024-11-20 09:17:09.579243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.747 qpair failed and we were unable to recover it.
00:26:32.747 [2024-11-20 09:17:09.579339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.747 [2024-11-20 09:17:09.579368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.747 qpair failed and we were unable to recover it.
00:26:32.747 [2024-11-20 09:17:09.579460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.747 [2024-11-20 09:17:09.579489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.747 qpair failed and we were unable to recover it.
00:26:32.747 [2024-11-20 09:17:09.579586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.747 [2024-11-20 09:17:09.579612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.747 qpair failed and we were unable to recover it. 00:26:32.747 [2024-11-20 09:17:09.579703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.747 [2024-11-20 09:17:09.579729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.747 qpair failed and we were unable to recover it. 00:26:32.747 [2024-11-20 09:17:09.579807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.747 [2024-11-20 09:17:09.579833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.747 qpair failed and we were unable to recover it. 00:26:32.747 [2024-11-20 09:17:09.579913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.747 [2024-11-20 09:17:09.579939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.747 qpair failed and we were unable to recover it. 00:26:32.747 [2024-11-20 09:17:09.580036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.747 [2024-11-20 09:17:09.580064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.747 qpair failed and we were unable to recover it. 
00:26:32.747 [2024-11-20 09:17:09.580143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.747 [2024-11-20 09:17:09.580171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.747 qpair failed and we were unable to recover it. 00:26:32.747 [2024-11-20 09:17:09.580290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.747 [2024-11-20 09:17:09.580323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.747 qpair failed and we were unable to recover it. 00:26:32.747 [2024-11-20 09:17:09.580418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.747 [2024-11-20 09:17:09.580457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.747 qpair failed and we were unable to recover it. 00:26:32.747 [2024-11-20 09:17:09.580547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.747 [2024-11-20 09:17:09.580574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.747 qpair failed and we were unable to recover it. 00:26:32.747 [2024-11-20 09:17:09.580654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.747 [2024-11-20 09:17:09.580680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.747 qpair failed and we were unable to recover it. 
00:26:32.747 [2024-11-20 09:17:09.580765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.748 [2024-11-20 09:17:09.580791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.748 qpair failed and we were unable to recover it. 00:26:32.748 [2024-11-20 09:17:09.580897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.748 [2024-11-20 09:17:09.580923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.748 qpair failed and we were unable to recover it. 00:26:32.748 [2024-11-20 09:17:09.581009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.748 [2024-11-20 09:17:09.581037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.748 qpair failed and we were unable to recover it. 00:26:32.748 [2024-11-20 09:17:09.581120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.748 [2024-11-20 09:17:09.581146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.748 qpair failed and we were unable to recover it. 00:26:32.748 [2024-11-20 09:17:09.581234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.748 [2024-11-20 09:17:09.581261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.748 qpair failed and we were unable to recover it. 
00:26:32.748 [2024-11-20 09:17:09.581353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.748 [2024-11-20 09:17:09.581380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.748 qpair failed and we were unable to recover it. 00:26:32.748 [2024-11-20 09:17:09.581466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.748 [2024-11-20 09:17:09.581494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.748 qpair failed and we were unable to recover it. 00:26:32.748 [2024-11-20 09:17:09.581584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.748 [2024-11-20 09:17:09.581618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.748 qpair failed and we were unable to recover it. 00:26:32.748 [2024-11-20 09:17:09.581707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.748 [2024-11-20 09:17:09.581734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.748 qpair failed and we were unable to recover it. 00:26:32.748 [2024-11-20 09:17:09.581872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.748 [2024-11-20 09:17:09.581899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.748 qpair failed and we were unable to recover it. 
00:26:32.748 [2024-11-20 09:17:09.581981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.748 [2024-11-20 09:17:09.582007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.748 qpair failed and we were unable to recover it. 00:26:32.748 [2024-11-20 09:17:09.582089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.748 [2024-11-20 09:17:09.582115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.748 qpair failed and we were unable to recover it. 00:26:32.748 [2024-11-20 09:17:09.582210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.748 [2024-11-20 09:17:09.582236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.748 qpair failed and we were unable to recover it. 00:26:32.748 [2024-11-20 09:17:09.582318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.748 [2024-11-20 09:17:09.582344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.748 qpair failed and we were unable to recover it. 00:26:32.748 [2024-11-20 09:17:09.582431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.748 [2024-11-20 09:17:09.582458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.748 qpair failed and we were unable to recover it. 
00:26:32.748 [2024-11-20 09:17:09.582538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.748 [2024-11-20 09:17:09.582564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.748 qpair failed and we were unable to recover it. 00:26:32.748 [2024-11-20 09:17:09.582676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.748 [2024-11-20 09:17:09.582701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.748 qpair failed and we were unable to recover it. 00:26:32.748 [2024-11-20 09:17:09.582807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.748 [2024-11-20 09:17:09.582833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.748 qpair failed and we were unable to recover it. 00:26:32.748 [2024-11-20 09:17:09.583033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.748 [2024-11-20 09:17:09.583061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.748 qpair failed and we were unable to recover it. 00:26:32.748 [2024-11-20 09:17:09.583142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.748 [2024-11-20 09:17:09.583168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.748 qpair failed and we were unable to recover it. 
00:26:32.748 [2024-11-20 09:17:09.583254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.748 [2024-11-20 09:17:09.583281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.748 qpair failed and we were unable to recover it. 00:26:32.748 [2024-11-20 09:17:09.583415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.748 [2024-11-20 09:17:09.583442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.748 qpair failed and we were unable to recover it. 00:26:32.748 [2024-11-20 09:17:09.583525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.748 [2024-11-20 09:17:09.583551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.748 qpair failed and we were unable to recover it. 00:26:32.748 [2024-11-20 09:17:09.583670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.748 [2024-11-20 09:17:09.583695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.748 qpair failed and we were unable to recover it. 00:26:32.748 [2024-11-20 09:17:09.583781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.748 [2024-11-20 09:17:09.583808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.748 qpair failed and we were unable to recover it. 
00:26:32.748 [2024-11-20 09:17:09.583901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.748 [2024-11-20 09:17:09.583927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.748 qpair failed and we were unable to recover it. 00:26:32.748 [2024-11-20 09:17:09.584013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.748 [2024-11-20 09:17:09.584039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.748 qpair failed and we were unable to recover it. 00:26:32.748 [2024-11-20 09:17:09.584127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.748 [2024-11-20 09:17:09.584152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.748 qpair failed and we were unable to recover it. 00:26:32.748 [2024-11-20 09:17:09.584260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.748 [2024-11-20 09:17:09.584286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.748 qpair failed and we were unable to recover it. 00:26:32.749 [2024-11-20 09:17:09.584420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.749 [2024-11-20 09:17:09.584459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.749 qpair failed and we were unable to recover it. 
00:26:32.749 [2024-11-20 09:17:09.584548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.749 [2024-11-20 09:17:09.584575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.749 qpair failed and we were unable to recover it. 00:26:32.749 [2024-11-20 09:17:09.584655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.749 [2024-11-20 09:17:09.584682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.749 qpair failed and we were unable to recover it. 00:26:32.749 [2024-11-20 09:17:09.584770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.749 [2024-11-20 09:17:09.584795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.749 qpair failed and we were unable to recover it. 00:26:32.749 [2024-11-20 09:17:09.584878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.749 [2024-11-20 09:17:09.584905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.749 qpair failed and we were unable to recover it. 00:26:32.749 [2024-11-20 09:17:09.585004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.749 [2024-11-20 09:17:09.585030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.749 qpair failed and we were unable to recover it. 
00:26:32.749 [2024-11-20 09:17:09.585144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.749 [2024-11-20 09:17:09.585171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.749 qpair failed and we were unable to recover it. 00:26:32.749 [2024-11-20 09:17:09.585263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.749 [2024-11-20 09:17:09.585288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.749 qpair failed and we were unable to recover it. 00:26:32.749 [2024-11-20 09:17:09.585407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.749 [2024-11-20 09:17:09.585434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.749 qpair failed and we were unable to recover it. 00:26:32.749 [2024-11-20 09:17:09.585513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.749 [2024-11-20 09:17:09.585539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.749 qpair failed and we were unable to recover it. 00:26:32.749 [2024-11-20 09:17:09.585658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.749 [2024-11-20 09:17:09.585684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.749 qpair failed and we were unable to recover it. 
00:26:32.749 [2024-11-20 09:17:09.585772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.749 [2024-11-20 09:17:09.585798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.749 qpair failed and we were unable to recover it. 00:26:32.749 [2024-11-20 09:17:09.585879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.749 [2024-11-20 09:17:09.585904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.749 qpair failed and we were unable to recover it. 00:26:32.749 [2024-11-20 09:17:09.585997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.749 [2024-11-20 09:17:09.586023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.749 qpair failed and we were unable to recover it. 00:26:32.749 [2024-11-20 09:17:09.586136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.749 [2024-11-20 09:17:09.586162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.749 qpair failed and we were unable to recover it. 00:26:32.749 [2024-11-20 09:17:09.586355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.749 [2024-11-20 09:17:09.586383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.749 qpair failed and we were unable to recover it. 
00:26:32.749 [2024-11-20 09:17:09.586477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.749 [2024-11-20 09:17:09.586503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.749 qpair failed and we were unable to recover it. 00:26:32.749 [2024-11-20 09:17:09.586587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.749 [2024-11-20 09:17:09.586614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.749 qpair failed and we were unable to recover it. 00:26:32.749 [2024-11-20 09:17:09.586699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.749 [2024-11-20 09:17:09.586730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.749 qpair failed and we were unable to recover it. 00:26:32.749 [2024-11-20 09:17:09.586843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.749 [2024-11-20 09:17:09.586870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.749 qpair failed and we were unable to recover it. 00:26:32.749 [2024-11-20 09:17:09.586952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.749 [2024-11-20 09:17:09.586978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.749 qpair failed and we were unable to recover it. 
00:26:32.749 [2024-11-20 09:17:09.587079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.749 [2024-11-20 09:17:09.587105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.749 qpair failed and we were unable to recover it. 00:26:32.749 [2024-11-20 09:17:09.587225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.749 [2024-11-20 09:17:09.587251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.749 qpair failed and we were unable to recover it. 00:26:32.749 [2024-11-20 09:17:09.587335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.749 [2024-11-20 09:17:09.587362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.749 qpair failed and we were unable to recover it. 00:26:32.749 [2024-11-20 09:17:09.587469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.749 [2024-11-20 09:17:09.587494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.749 qpair failed and we were unable to recover it. 00:26:32.749 [2024-11-20 09:17:09.587576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.749 [2024-11-20 09:17:09.587603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.749 qpair failed and we were unable to recover it. 
00:26:32.749 [2024-11-20 09:17:09.587690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.749 [2024-11-20 09:17:09.587716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.749 qpair failed and we were unable to recover it. 00:26:32.749 [2024-11-20 09:17:09.587817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.749 [2024-11-20 09:17:09.587845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.749 qpair failed and we were unable to recover it. 00:26:32.749 [2024-11-20 09:17:09.587939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.749 [2024-11-20 09:17:09.587965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.749 qpair failed and we were unable to recover it. 00:26:32.749 [2024-11-20 09:17:09.588077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.749 [2024-11-20 09:17:09.588102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.749 qpair failed and we were unable to recover it. 00:26:32.749 [2024-11-20 09:17:09.588215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.749 [2024-11-20 09:17:09.588241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.749 qpair failed and we were unable to recover it. 
00:26:32.749 [2024-11-20 09:17:09.588331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.749 [2024-11-20 09:17:09.588357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.749 qpair failed and we were unable to recover it. 00:26:32.749 [2024-11-20 09:17:09.588449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.749 [2024-11-20 09:17:09.588474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.749 qpair failed and we were unable to recover it. 00:26:32.749 [2024-11-20 09:17:09.588556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.749 [2024-11-20 09:17:09.588582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.749 qpair failed and we were unable to recover it. 00:26:32.749 [2024-11-20 09:17:09.588779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.749 [2024-11-20 09:17:09.588805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.749 qpair failed and we were unable to recover it. 00:26:32.749 [2024-11-20 09:17:09.588885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.588911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 
00:26:32.750 [2024-11-20 09:17:09.589025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.589053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 00:26:32.750 [2024-11-20 09:17:09.589137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.589164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 00:26:32.750 [2024-11-20 09:17:09.589272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.589319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 00:26:32.750 [2024-11-20 09:17:09.589414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.589440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 00:26:32.750 [2024-11-20 09:17:09.589554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.589580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 
00:26:32.750 [2024-11-20 09:17:09.589697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.589723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 00:26:32.750 [2024-11-20 09:17:09.589821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.589847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 00:26:32.750 [2024-11-20 09:17:09.589938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.589965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 00:26:32.750 [2024-11-20 09:17:09.590068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.590094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 00:26:32.750 [2024-11-20 09:17:09.590195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.590234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 
00:26:32.750 [2024-11-20 09:17:09.590357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.590396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 00:26:32.750 [2024-11-20 09:17:09.590515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.590542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 00:26:32.750 [2024-11-20 09:17:09.590684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.590711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 00:26:32.750 [2024-11-20 09:17:09.590796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.590822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 00:26:32.750 [2024-11-20 09:17:09.590917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.590942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 
00:26:32.750 [2024-11-20 09:17:09.591030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.591056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 00:26:32.750 [2024-11-20 09:17:09.591171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.591197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 00:26:32.750 [2024-11-20 09:17:09.591326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.591354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 00:26:32.750 [2024-11-20 09:17:09.591436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.591462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 00:26:32.750 [2024-11-20 09:17:09.591545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.591572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 
00:26:32.750 [2024-11-20 09:17:09.591692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.591718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 00:26:32.750 [2024-11-20 09:17:09.591830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.591855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 00:26:32.750 [2024-11-20 09:17:09.591943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.591969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 00:26:32.750 [2024-11-20 09:17:09.592063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.592090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 00:26:32.750 [2024-11-20 09:17:09.592220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.592259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 
00:26:32.750 [2024-11-20 09:17:09.592370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.592409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 00:26:32.750 [2024-11-20 09:17:09.592511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.592550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 00:26:32.750 [2024-11-20 09:17:09.592645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.592673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 00:26:32.750 [2024-11-20 09:17:09.592763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.592790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 00:26:32.750 [2024-11-20 09:17:09.592903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.592930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 
00:26:32.750 [2024-11-20 09:17:09.593012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.593039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 00:26:32.750 [2024-11-20 09:17:09.593236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.593263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 00:26:32.750 [2024-11-20 09:17:09.593388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.593429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 00:26:32.750 [2024-11-20 09:17:09.593549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.593576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 00:26:32.750 [2024-11-20 09:17:09.593671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.593698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 
00:26:32.750 [2024-11-20 09:17:09.593811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.750 [2024-11-20 09:17:09.593837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.750 qpair failed and we were unable to recover it. 00:26:32.750 [2024-11-20 09:17:09.593932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.593958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 00:26:32.751 [2024-11-20 09:17:09.594051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.594077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 00:26:32.751 [2024-11-20 09:17:09.594195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.594222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 00:26:32.751 [2024-11-20 09:17:09.594313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.594340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 
00:26:32.751 [2024-11-20 09:17:09.594436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.594462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 00:26:32.751 [2024-11-20 09:17:09.594554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.594583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 00:26:32.751 [2024-11-20 09:17:09.594678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.594708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 00:26:32.751 [2024-11-20 09:17:09.594798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.594825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 00:26:32.751 [2024-11-20 09:17:09.594916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.594942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 
00:26:32.751 [2024-11-20 09:17:09.595028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.595055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 00:26:32.751 [2024-11-20 09:17:09.595136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.595163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 00:26:32.751 [2024-11-20 09:17:09.595276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.595312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 00:26:32.751 [2024-11-20 09:17:09.595404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.595430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 00:26:32.751 [2024-11-20 09:17:09.595531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.595576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 
00:26:32.751 [2024-11-20 09:17:09.595681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.595709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 00:26:32.751 [2024-11-20 09:17:09.595824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.595851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 00:26:32.751 [2024-11-20 09:17:09.595974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.595999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 00:26:32.751 [2024-11-20 09:17:09.596087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.596112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 00:26:32.751 [2024-11-20 09:17:09.596198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.596224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 
00:26:32.751 [2024-11-20 09:17:09.596340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.596368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 00:26:32.751 [2024-11-20 09:17:09.596458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.596485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 00:26:32.751 [2024-11-20 09:17:09.596604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.596630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 00:26:32.751 [2024-11-20 09:17:09.596738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.596765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 00:26:32.751 [2024-11-20 09:17:09.596878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.596904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 
00:26:32.751 [2024-11-20 09:17:09.596986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.597012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 00:26:32.751 [2024-11-20 09:17:09.597099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.597126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 00:26:32.751 [2024-11-20 09:17:09.597215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.597241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 00:26:32.751 [2024-11-20 09:17:09.597341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.597369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 00:26:32.751 [2024-11-20 09:17:09.597452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.597478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 
00:26:32.751 [2024-11-20 09:17:09.597593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.597619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 00:26:32.751 [2024-11-20 09:17:09.597729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.597755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 00:26:32.751 [2024-11-20 09:17:09.597844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.597871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 00:26:32.751 [2024-11-20 09:17:09.597956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.597982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 00:26:32.751 [2024-11-20 09:17:09.598066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.598092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 
00:26:32.751 [2024-11-20 09:17:09.598219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.598258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 00:26:32.751 [2024-11-20 09:17:09.598363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.598392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.751 qpair failed and we were unable to recover it. 00:26:32.751 [2024-11-20 09:17:09.598477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.751 [2024-11-20 09:17:09.598503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.752 qpair failed and we were unable to recover it. 00:26:32.752 [2024-11-20 09:17:09.598587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.752 [2024-11-20 09:17:09.598612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.752 qpair failed and we were unable to recover it. 00:26:32.752 [2024-11-20 09:17:09.598721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.752 [2024-11-20 09:17:09.598747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.752 qpair failed and we were unable to recover it. 
00:26:32.752 [2024-11-20 09:17:09.598831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.752 [2024-11-20 09:17:09.598856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.752 qpair failed and we were unable to recover it. 00:26:32.752 [2024-11-20 09:17:09.598947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.752 [2024-11-20 09:17:09.598975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.752 qpair failed and we were unable to recover it. 00:26:32.752 [2024-11-20 09:17:09.599101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.752 [2024-11-20 09:17:09.599140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.752 qpair failed and we were unable to recover it. 00:26:32.752 [2024-11-20 09:17:09.599268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.752 [2024-11-20 09:17:09.599314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.752 qpair failed and we were unable to recover it. 00:26:32.752 [2024-11-20 09:17:09.599417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.752 [2024-11-20 09:17:09.599447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.752 qpair failed and we were unable to recover it. 
00:26:32.752 [2024-11-20 09:17:09.599538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.752 [2024-11-20 09:17:09.599565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.752 qpair failed and we were unable to recover it. 00:26:32.752 [2024-11-20 09:17:09.599653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.752 [2024-11-20 09:17:09.599679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.752 qpair failed and we were unable to recover it. 00:26:32.752 [2024-11-20 09:17:09.599771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.752 [2024-11-20 09:17:09.599798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.752 qpair failed and we were unable to recover it. 00:26:32.752 [2024-11-20 09:17:09.599892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.752 [2024-11-20 09:17:09.599919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.752 qpair failed and we were unable to recover it. 00:26:32.752 [2024-11-20 09:17:09.600004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.752 [2024-11-20 09:17:09.600033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.752 qpair failed and we were unable to recover it. 
00:26:32.752 [2024-11-20 09:17:09.600122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.752 [2024-11-20 09:17:09.600148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.752 qpair failed and we were unable to recover it. 00:26:32.752 [2024-11-20 09:17:09.600238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.752 [2024-11-20 09:17:09.600263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.752 qpair failed and we were unable to recover it. 00:26:32.752 [2024-11-20 09:17:09.600359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.752 [2024-11-20 09:17:09.600386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.752 qpair failed and we were unable to recover it. 00:26:32.752 [2024-11-20 09:17:09.600582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.752 [2024-11-20 09:17:09.600608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.752 qpair failed and we were unable to recover it. 00:26:32.752 [2024-11-20 09:17:09.600718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.752 [2024-11-20 09:17:09.600749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.752 qpair failed and we were unable to recover it. 
00:26:32.752 [2024-11-20 09:17:09.600889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.752 [2024-11-20 09:17:09.600916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.752 qpair failed and we were unable to recover it. 00:26:32.752 [2024-11-20 09:17:09.600992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.752 [2024-11-20 09:17:09.601018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.752 qpair failed and we were unable to recover it. 00:26:32.752 [2024-11-20 09:17:09.601109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.752 [2024-11-20 09:17:09.601135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.752 qpair failed and we were unable to recover it. 00:26:32.752 [2024-11-20 09:17:09.601227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.752 [2024-11-20 09:17:09.601253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.752 qpair failed and we were unable to recover it. 00:26:32.752 [2024-11-20 09:17:09.601340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.752 [2024-11-20 09:17:09.601366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.752 qpair failed and we were unable to recover it. 
00:26:32.752 [2024-11-20 09:17:09.601457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.752 [2024-11-20 09:17:09.601483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.752 qpair failed and we were unable to recover it. 00:26:32.752 [2024-11-20 09:17:09.601568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.752 [2024-11-20 09:17:09.601595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.752 qpair failed and we were unable to recover it. 00:26:32.752 [2024-11-20 09:17:09.601680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.752 [2024-11-20 09:17:09.601705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.752 qpair failed and we were unable to recover it. 00:26:32.752 [2024-11-20 09:17:09.601779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.752 [2024-11-20 09:17:09.601805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.752 qpair failed and we were unable to recover it. 00:26:32.752 [2024-11-20 09:17:09.601893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.752 [2024-11-20 09:17:09.601921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.752 qpair failed and we were unable to recover it. 
00:26:32.752 [2024-11-20 09:17:09.602011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.752 [2024-11-20 09:17:09.602040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.752 qpair failed and we were unable to recover it. 00:26:32.752 [2024-11-20 09:17:09.602136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.752 [2024-11-20 09:17:09.602163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.752 qpair failed and we were unable to recover it. 00:26:32.752 [2024-11-20 09:17:09.602257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.752 [2024-11-20 09:17:09.602283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.752 qpair failed and we were unable to recover it. 00:26:32.752 [2024-11-20 09:17:09.602380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.752 [2024-11-20 09:17:09.602407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.752 qpair failed and we were unable to recover it. 00:26:32.752 [2024-11-20 09:17:09.602495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.752 [2024-11-20 09:17:09.602521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.753 qpair failed and we were unable to recover it. 
00:26:32.753 [2024-11-20 09:17:09.602643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.753 [2024-11-20 09:17:09.602671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.753 qpair failed and we were unable to recover it. 00:26:32.753 [2024-11-20 09:17:09.602785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.753 [2024-11-20 09:17:09.602812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.753 qpair failed and we were unable to recover it. 00:26:32.753 [2024-11-20 09:17:09.602892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.753 [2024-11-20 09:17:09.602918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.753 qpair failed and we were unable to recover it. 00:26:32.753 [2024-11-20 09:17:09.603012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.753 [2024-11-20 09:17:09.603038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.753 qpair failed and we were unable to recover it. 00:26:32.753 [2024-11-20 09:17:09.603125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.753 [2024-11-20 09:17:09.603150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.753 qpair failed and we were unable to recover it. 
00:26:32.753 [... the same connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." sequence repeats for tqpair=0x7fe96c000b90, 0x7fe968000b90, 0x7fe974000b90, and 0x18fffa0 (all addr=10.0.0.2, port=4420) through 2024-11-20 09:17:09.617458 ...]
00:26:32.756 [2024-11-20 09:17:09.617540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.756 [2024-11-20 09:17:09.617566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.756 qpair failed and we were unable to recover it. 00:26:32.756 [2024-11-20 09:17:09.617645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.756 [2024-11-20 09:17:09.617671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.756 qpair failed and we were unable to recover it. 00:26:32.756 [2024-11-20 09:17:09.617761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.756 [2024-11-20 09:17:09.617786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.756 qpair failed and we were unable to recover it. 00:26:32.756 [2024-11-20 09:17:09.617900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.756 [2024-11-20 09:17:09.617928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.756 qpair failed and we were unable to recover it. 00:26:32.756 [2024-11-20 09:17:09.618043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.756 [2024-11-20 09:17:09.618072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.756 qpair failed and we were unable to recover it. 
00:26:32.756 [2024-11-20 09:17:09.618153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.756 [2024-11-20 09:17:09.618180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.756 qpair failed and we were unable to recover it. 00:26:32.756 [2024-11-20 09:17:09.618271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.756 [2024-11-20 09:17:09.618297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.756 qpair failed and we were unable to recover it. 00:26:32.756 [2024-11-20 09:17:09.618391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.756 [2024-11-20 09:17:09.618418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.756 qpair failed and we were unable to recover it. 00:26:32.756 [2024-11-20 09:17:09.618511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.756 [2024-11-20 09:17:09.618538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.756 qpair failed and we were unable to recover it. 00:26:32.756 [2024-11-20 09:17:09.618616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.756 [2024-11-20 09:17:09.618643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.756 qpair failed and we were unable to recover it. 
00:26:32.756 [2024-11-20 09:17:09.618755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.756 [2024-11-20 09:17:09.618782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.756 qpair failed and we were unable to recover it. 00:26:32.756 [2024-11-20 09:17:09.618872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.756 [2024-11-20 09:17:09.618899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.756 qpair failed and we were unable to recover it. 00:26:32.756 [2024-11-20 09:17:09.618990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.756 [2024-11-20 09:17:09.619017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.756 qpair failed and we were unable to recover it. 00:26:32.756 [2024-11-20 09:17:09.619100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.756 [2024-11-20 09:17:09.619127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.756 qpair failed and we were unable to recover it. 00:26:32.756 [2024-11-20 09:17:09.619215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.756 [2024-11-20 09:17:09.619242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.756 qpair failed and we were unable to recover it. 
00:26:32.756 [2024-11-20 09:17:09.619326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.756 [2024-11-20 09:17:09.619354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.756 qpair failed and we were unable to recover it. 00:26:32.756 [2024-11-20 09:17:09.619445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.756 [2024-11-20 09:17:09.619471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.756 qpair failed and we were unable to recover it. 00:26:32.756 [2024-11-20 09:17:09.619565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.756 [2024-11-20 09:17:09.619591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.756 qpair failed and we were unable to recover it. 00:26:32.756 [2024-11-20 09:17:09.619705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.756 [2024-11-20 09:17:09.619731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.756 qpair failed and we were unable to recover it. 00:26:32.756 [2024-11-20 09:17:09.619847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.756 [2024-11-20 09:17:09.619873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.756 qpair failed and we were unable to recover it. 
00:26:32.756 [2024-11-20 09:17:09.619986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.756 [2024-11-20 09:17:09.620011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.756 qpair failed and we were unable to recover it. 00:26:32.756 [2024-11-20 09:17:09.620106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.756 [2024-11-20 09:17:09.620133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.756 qpair failed and we were unable to recover it. 00:26:32.756 [2024-11-20 09:17:09.620231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.756 [2024-11-20 09:17:09.620271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.756 qpair failed and we were unable to recover it. 00:26:32.756 [2024-11-20 09:17:09.620406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.756 [2024-11-20 09:17:09.620446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.757 qpair failed and we were unable to recover it. 00:26:32.757 [2024-11-20 09:17:09.620546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.757 [2024-11-20 09:17:09.620573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.757 qpair failed and we were unable to recover it. 
00:26:32.757 [2024-11-20 09:17:09.620657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.757 [2024-11-20 09:17:09.620683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.757 qpair failed and we were unable to recover it. 00:26:32.757 [2024-11-20 09:17:09.620771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.757 [2024-11-20 09:17:09.620797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.757 qpair failed and we were unable to recover it. 00:26:32.757 [2024-11-20 09:17:09.620922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.757 [2024-11-20 09:17:09.620950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.757 qpair failed and we were unable to recover it. 00:26:32.757 [2024-11-20 09:17:09.621044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.757 [2024-11-20 09:17:09.621072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.757 qpair failed and we were unable to recover it. 00:26:32.757 [2024-11-20 09:17:09.621162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.757 [2024-11-20 09:17:09.621189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.757 qpair failed and we were unable to recover it. 
00:26:32.757 [2024-11-20 09:17:09.621286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.757 [2024-11-20 09:17:09.621321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.757 qpair failed and we were unable to recover it. 00:26:32.757 [2024-11-20 09:17:09.621410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.757 [2024-11-20 09:17:09.621436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.757 qpair failed and we were unable to recover it. 00:26:32.757 [2024-11-20 09:17:09.621524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.757 [2024-11-20 09:17:09.621550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.757 qpair failed and we were unable to recover it. 00:26:32.757 [2024-11-20 09:17:09.621637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.757 [2024-11-20 09:17:09.621663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.757 qpair failed and we were unable to recover it. 00:26:32.757 [2024-11-20 09:17:09.621744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.757 [2024-11-20 09:17:09.621770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.757 qpair failed and we were unable to recover it. 
00:26:32.757 [2024-11-20 09:17:09.621878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.757 [2024-11-20 09:17:09.621904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.757 qpair failed and we were unable to recover it. 00:26:32.757 [2024-11-20 09:17:09.622001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.757 [2024-11-20 09:17:09.622027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.757 qpair failed and we were unable to recover it. 00:26:32.757 [2024-11-20 09:17:09.622113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.757 [2024-11-20 09:17:09.622141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.757 qpair failed and we were unable to recover it. 00:26:32.757 [2024-11-20 09:17:09.622269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.757 [2024-11-20 09:17:09.622317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.757 qpair failed and we were unable to recover it. 00:26:32.757 [2024-11-20 09:17:09.622428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.757 [2024-11-20 09:17:09.622456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.757 qpair failed and we were unable to recover it. 
00:26:32.757 [2024-11-20 09:17:09.622539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.757 [2024-11-20 09:17:09.622565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.757 qpair failed and we were unable to recover it. 00:26:32.757 [2024-11-20 09:17:09.622664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.757 [2024-11-20 09:17:09.622689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.757 qpair failed and we were unable to recover it. 00:26:32.757 [2024-11-20 09:17:09.622806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.757 [2024-11-20 09:17:09.622832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.757 qpair failed and we were unable to recover it. 00:26:32.757 [2024-11-20 09:17:09.622947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.757 [2024-11-20 09:17:09.622979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.757 qpair failed and we were unable to recover it. 00:26:32.757 [2024-11-20 09:17:09.623065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.757 [2024-11-20 09:17:09.623094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.757 qpair failed and we were unable to recover it. 
00:26:32.757 [2024-11-20 09:17:09.623182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.757 [2024-11-20 09:17:09.623209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.757 qpair failed and we were unable to recover it. 00:26:32.757 [2024-11-20 09:17:09.623292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.757 [2024-11-20 09:17:09.623326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.757 qpair failed and we were unable to recover it. 00:26:32.757 [2024-11-20 09:17:09.623410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.757 [2024-11-20 09:17:09.623436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.757 qpair failed and we were unable to recover it. 00:26:32.757 [2024-11-20 09:17:09.623533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.757 [2024-11-20 09:17:09.623560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.757 qpair failed and we were unable to recover it. 00:26:32.757 [2024-11-20 09:17:09.623669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.757 [2024-11-20 09:17:09.623696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.757 qpair failed and we were unable to recover it. 
00:26:32.757 [2024-11-20 09:17:09.623846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.757 [2024-11-20 09:17:09.623898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.757 qpair failed and we were unable to recover it. 00:26:32.757 [2024-11-20 09:17:09.624053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.757 [2024-11-20 09:17:09.624107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.757 qpair failed and we were unable to recover it. 00:26:32.757 [2024-11-20 09:17:09.624292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.757 [2024-11-20 09:17:09.624372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.757 qpair failed and we were unable to recover it. 00:26:32.757 [2024-11-20 09:17:09.624470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.757 [2024-11-20 09:17:09.624498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.757 qpair failed and we were unable to recover it. 00:26:32.757 [2024-11-20 09:17:09.624633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.757 [2024-11-20 09:17:09.624660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.757 qpair failed and we were unable to recover it. 
00:26:32.757 [2024-11-20 09:17:09.624775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.757 [2024-11-20 09:17:09.624801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.757 qpair failed and we were unable to recover it. 00:26:32.757 [2024-11-20 09:17:09.624883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.757 [2024-11-20 09:17:09.624909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.757 qpair failed and we were unable to recover it. 00:26:32.757 [2024-11-20 09:17:09.625020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.757 [2024-11-20 09:17:09.625047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.758 qpair failed and we were unable to recover it. 00:26:32.758 [2024-11-20 09:17:09.625135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.758 [2024-11-20 09:17:09.625161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.758 qpair failed and we were unable to recover it. 00:26:32.758 [2024-11-20 09:17:09.625266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.758 [2024-11-20 09:17:09.625314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.758 qpair failed and we were unable to recover it. 
00:26:32.758 [2024-11-20 09:17:09.625406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.758 [2024-11-20 09:17:09.625434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.758 qpair failed and we were unable to recover it. 00:26:32.758 [2024-11-20 09:17:09.625523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.758 [2024-11-20 09:17:09.625552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.758 qpair failed and we were unable to recover it. 00:26:32.758 [2024-11-20 09:17:09.625676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.758 [2024-11-20 09:17:09.625702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.758 qpair failed and we were unable to recover it. 00:26:32.758 [2024-11-20 09:17:09.625818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.758 [2024-11-20 09:17:09.625844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.758 qpair failed and we were unable to recover it. 00:26:32.758 [2024-11-20 09:17:09.625941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.758 [2024-11-20 09:17:09.625968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.758 qpair failed and we were unable to recover it. 
00:26:32.758 [2024-11-20 09:17:09.626082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.758 [2024-11-20 09:17:09.626109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.758 qpair failed and we were unable to recover it. 00:26:32.758 [2024-11-20 09:17:09.626239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.758 [2024-11-20 09:17:09.626279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.758 qpair failed and we were unable to recover it. 00:26:32.758 [2024-11-20 09:17:09.626418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.758 [2024-11-20 09:17:09.626446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.758 qpair failed and we were unable to recover it. 00:26:32.758 [2024-11-20 09:17:09.626539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.758 [2024-11-20 09:17:09.626566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.758 qpair failed and we were unable to recover it. 00:26:32.758 [2024-11-20 09:17:09.626661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.758 [2024-11-20 09:17:09.626688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.758 qpair failed and we were unable to recover it. 
00:26:32.758 [2024-11-20 09:17:09.626803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.758 [2024-11-20 09:17:09.626830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.758 qpair failed and we were unable to recover it. 00:26:32.758 [2024-11-20 09:17:09.626914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.758 [2024-11-20 09:17:09.626944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.758 qpair failed and we were unable to recover it. 00:26:32.758 [2024-11-20 09:17:09.627030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.758 [2024-11-20 09:17:09.627058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.758 qpair failed and we were unable to recover it. 00:26:32.758 [2024-11-20 09:17:09.627154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.758 [2024-11-20 09:17:09.627193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.758 qpair failed and we were unable to recover it. 00:26:32.758 [2024-11-20 09:17:09.627284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.758 [2024-11-20 09:17:09.627319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.758 qpair failed and we were unable to recover it. 
00:26:32.758 [2024-11-20 09:17:09.627415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.758 [2024-11-20 09:17:09.627442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.758 qpair failed and we were unable to recover it.
[Identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." sequences repeat continuously from 09:17:09.627565 through 09:17:09.642214, covering tqpair values 0x7fe974000b90, 0x7fe96c000b90, 0x7fe968000b90, and 0x18fffa0, all targeting addr=10.0.0.2, port=4420; duplicate entries omitted.]
00:26:32.761 [2024-11-20 09:17:09.642317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.761 [2024-11-20 09:17:09.642347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.761 qpair failed and we were unable to recover it. 00:26:32.761 [2024-11-20 09:17:09.642446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.761 [2024-11-20 09:17:09.642475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.761 qpair failed and we were unable to recover it. 00:26:32.761 [2024-11-20 09:17:09.642571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.761 [2024-11-20 09:17:09.642597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.761 qpair failed and we were unable to recover it. 00:26:32.761 [2024-11-20 09:17:09.642710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.761 [2024-11-20 09:17:09.642736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.761 qpair failed and we were unable to recover it. 00:26:32.761 [2024-11-20 09:17:09.642814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.761 [2024-11-20 09:17:09.642840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.761 qpair failed and we were unable to recover it. 
00:26:32.761 [2024-11-20 09:17:09.642921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.761 [2024-11-20 09:17:09.642947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.761 qpair failed and we were unable to recover it. 00:26:32.761 [2024-11-20 09:17:09.643061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.761 [2024-11-20 09:17:09.643088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.761 qpair failed and we were unable to recover it. 00:26:32.761 [2024-11-20 09:17:09.643200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.761 [2024-11-20 09:17:09.643227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.761 qpair failed and we were unable to recover it. 00:26:32.761 [2024-11-20 09:17:09.643324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.761 [2024-11-20 09:17:09.643352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.761 qpair failed and we were unable to recover it. 00:26:32.761 [2024-11-20 09:17:09.643433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.761 [2024-11-20 09:17:09.643460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.761 qpair failed and we were unable to recover it. 
00:26:32.761 [2024-11-20 09:17:09.643549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.643588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 00:26:32.762 [2024-11-20 09:17:09.643671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.643697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 00:26:32.762 [2024-11-20 09:17:09.643816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.643844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 00:26:32.762 [2024-11-20 09:17:09.643935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.643962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 00:26:32.762 [2024-11-20 09:17:09.644105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.644135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 
00:26:32.762 [2024-11-20 09:17:09.644230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.644259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 00:26:32.762 [2024-11-20 09:17:09.644371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.644398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 00:26:32.762 [2024-11-20 09:17:09.644535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.644596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 00:26:32.762 [2024-11-20 09:17:09.644791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.644854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 00:26:32.762 [2024-11-20 09:17:09.645090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.645117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 
00:26:32.762 [2024-11-20 09:17:09.645205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.645232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 00:26:32.762 [2024-11-20 09:17:09.645321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.645348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 00:26:32.762 [2024-11-20 09:17:09.645464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.645491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 00:26:32.762 [2024-11-20 09:17:09.645579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.645606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 00:26:32.762 [2024-11-20 09:17:09.645697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.645723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 
00:26:32.762 [2024-11-20 09:17:09.645837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.645862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 00:26:32.762 [2024-11-20 09:17:09.645953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.645979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 00:26:32.762 [2024-11-20 09:17:09.646068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.646100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 00:26:32.762 [2024-11-20 09:17:09.646187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.646212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 00:26:32.762 [2024-11-20 09:17:09.646297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.646330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 
00:26:32.762 [2024-11-20 09:17:09.646415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.646441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 00:26:32.762 [2024-11-20 09:17:09.646530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.646556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 00:26:32.762 [2024-11-20 09:17:09.646642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.646670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 00:26:32.762 [2024-11-20 09:17:09.646788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.646814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 00:26:32.762 [2024-11-20 09:17:09.646905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.646933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 
00:26:32.762 [2024-11-20 09:17:09.647045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.647073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 00:26:32.762 [2024-11-20 09:17:09.647194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.647221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 00:26:32.762 [2024-11-20 09:17:09.647313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.647340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 00:26:32.762 [2024-11-20 09:17:09.647440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.647468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 00:26:32.762 [2024-11-20 09:17:09.647594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.647633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 
00:26:32.762 [2024-11-20 09:17:09.647716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.647744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 00:26:32.762 [2024-11-20 09:17:09.647866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.647893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 00:26:32.762 [2024-11-20 09:17:09.647976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.648003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 00:26:32.762 [2024-11-20 09:17:09.648118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.648145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 00:26:32.762 [2024-11-20 09:17:09.648231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.648258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 
00:26:32.762 [2024-11-20 09:17:09.648346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.648374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 00:26:32.762 [2024-11-20 09:17:09.648461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.648487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 00:26:32.762 [2024-11-20 09:17:09.648582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.762 [2024-11-20 09:17:09.648609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.762 qpair failed and we were unable to recover it. 00:26:32.763 [2024-11-20 09:17:09.648729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.763 [2024-11-20 09:17:09.648755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.763 qpair failed and we were unable to recover it. 00:26:32.763 [2024-11-20 09:17:09.648845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.763 [2024-11-20 09:17:09.648873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.763 qpair failed and we were unable to recover it. 
00:26:32.763 [2024-11-20 09:17:09.649116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.763 [2024-11-20 09:17:09.649182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.763 qpair failed and we were unable to recover it. 00:26:32.763 [2024-11-20 09:17:09.649381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.763 [2024-11-20 09:17:09.649408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.763 qpair failed and we were unable to recover it. 00:26:32.763 [2024-11-20 09:17:09.649498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.763 [2024-11-20 09:17:09.649526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.763 qpair failed and we were unable to recover it. 00:26:32.763 [2024-11-20 09:17:09.649659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.763 [2024-11-20 09:17:09.649686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.763 qpair failed and we were unable to recover it. 00:26:32.763 [2024-11-20 09:17:09.649786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.763 [2024-11-20 09:17:09.649826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.763 qpair failed and we were unable to recover it. 
00:26:32.763 [2024-11-20 09:17:09.649961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.763 [2024-11-20 09:17:09.649989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.763 qpair failed and we were unable to recover it. 00:26:32.763 [2024-11-20 09:17:09.650085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.763 [2024-11-20 09:17:09.650112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.763 qpair failed and we were unable to recover it. 00:26:32.763 [2024-11-20 09:17:09.650196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.763 [2024-11-20 09:17:09.650222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.763 qpair failed and we were unable to recover it. 00:26:32.763 [2024-11-20 09:17:09.650349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.763 [2024-11-20 09:17:09.650375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.763 qpair failed and we were unable to recover it. 00:26:32.763 [2024-11-20 09:17:09.650460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.763 [2024-11-20 09:17:09.650486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.763 qpair failed and we were unable to recover it. 
00:26:32.763 [2024-11-20 09:17:09.650581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.763 [2024-11-20 09:17:09.650608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.763 qpair failed and we were unable to recover it. 00:26:32.763 [2024-11-20 09:17:09.650732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.763 [2024-11-20 09:17:09.650758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.763 qpair failed and we were unable to recover it. 00:26:32.763 [2024-11-20 09:17:09.650853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.763 [2024-11-20 09:17:09.650880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.763 qpair failed and we were unable to recover it. 00:26:32.763 [2024-11-20 09:17:09.650999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.763 [2024-11-20 09:17:09.651027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.763 qpair failed and we were unable to recover it. 00:26:32.763 [2024-11-20 09:17:09.651116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.763 [2024-11-20 09:17:09.651143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.763 qpair failed and we were unable to recover it. 
00:26:32.763 [2024-11-20 09:17:09.651236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.763 [2024-11-20 09:17:09.651262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.763 qpair failed and we were unable to recover it. 00:26:32.763 [2024-11-20 09:17:09.651410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.763 [2024-11-20 09:17:09.651437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.763 qpair failed and we were unable to recover it. 00:26:32.763 [2024-11-20 09:17:09.651545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.763 [2024-11-20 09:17:09.651586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.763 qpair failed and we were unable to recover it. 00:26:32.763 [2024-11-20 09:17:09.651680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.763 [2024-11-20 09:17:09.651707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.763 qpair failed and we were unable to recover it. 00:26:32.763 [2024-11-20 09:17:09.651788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.763 [2024-11-20 09:17:09.651814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.763 qpair failed and we were unable to recover it. 
00:26:32.763 [2024-11-20 09:17:09.651907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.763 [2024-11-20 09:17:09.651933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.763 qpair failed and we were unable to recover it. 00:26:32.763 [2024-11-20 09:17:09.652023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.763 [2024-11-20 09:17:09.652050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.763 qpair failed and we were unable to recover it. 00:26:32.763 [2024-11-20 09:17:09.652165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.763 [2024-11-20 09:17:09.652193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.763 qpair failed and we were unable to recover it. 00:26:32.763 [2024-11-20 09:17:09.652294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.763 [2024-11-20 09:17:09.652345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.763 qpair failed and we were unable to recover it. 00:26:32.763 [2024-11-20 09:17:09.652460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.763 [2024-11-20 09:17:09.652500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.763 qpair failed and we were unable to recover it. 
00:26:32.763 [2024-11-20 09:17:09.652598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.763 [2024-11-20 09:17:09.652626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.763 qpair failed and we were unable to recover it.
00:26:32.763 [2024-11-20 09:17:09.652714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.763 [2024-11-20 09:17:09.652742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.763 qpair failed and we were unable to recover it.
00:26:32.763 [2024-11-20 09:17:09.652882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.763 [2024-11-20 09:17:09.652908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.763 qpair failed and we were unable to recover it.
00:26:32.763 [2024-11-20 09:17:09.653084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.763 [2024-11-20 09:17:09.653112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.763 qpair failed and we were unable to recover it.
00:26:32.763 [2024-11-20 09:17:09.653223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.763 [2024-11-20 09:17:09.653250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.763 qpair failed and we were unable to recover it.
00:26:32.763 [2024-11-20 09:17:09.653365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.763 [2024-11-20 09:17:09.653404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.763 qpair failed and we were unable to recover it.
00:26:32.763 [2024-11-20 09:17:09.653501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.763 [2024-11-20 09:17:09.653528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.763 qpair failed and we were unable to recover it.
00:26:32.763 [2024-11-20 09:17:09.653624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.763 [2024-11-20 09:17:09.653649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.763 qpair failed and we were unable to recover it.
00:26:32.763 [2024-11-20 09:17:09.653761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.763 [2024-11-20 09:17:09.653787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.763 qpair failed and we were unable to recover it.
00:26:32.763 [2024-11-20 09:17:09.653888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.763 [2024-11-20 09:17:09.653918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.763 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.654033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.654061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.654143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.654169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.654260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.654287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.654394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.654420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.654503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.654529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.654615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.654642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.654725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.654751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.654864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.654890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.654976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.655002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.655087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.655118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.655205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.655231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.655331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.655379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.655473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.655501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.655613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.655652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.655775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.655804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.655890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.655917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.656003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.656030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.656112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.656138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.656244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.656284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.656412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.656439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.656525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.656551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.656672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.656697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.656806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.656832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.656954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.656980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.657097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.657122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.657253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.657294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.657436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.657464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.657593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.657621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.657774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.657800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.657900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.657928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.658051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.658078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.658208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.658234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.658375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.658415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.658502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.658529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.658619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.658645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.658725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.764 [2024-11-20 09:17:09.658751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.764 qpair failed and we were unable to recover it.
00:26:32.764 [2024-11-20 09:17:09.658834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.658860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.658938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.658964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.659046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.659073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.659197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.659237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.659366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.659407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.659525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.659552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.659698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.659724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.659902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.659957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.660042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.660069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.660160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.660188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.660267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.660293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.660422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.660448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.660539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.660570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.660681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.660768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.661005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.661031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.661117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.661143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.661231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.661257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.661366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.661392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.661476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.661503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.661619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.661659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.661766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.661796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.661951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.662017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.662324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.662395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.662506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.662533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.662650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.662692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.662807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.662834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.662946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.662985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.663105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.663133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.663247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.663273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.663384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.663411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.663499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.663525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.663607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.663633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.663771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.663797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.663914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.663943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.664032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.664059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.664167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.664193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.664299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.765 [2024-11-20 09:17:09.664331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.765 qpair failed and we were unable to recover it.
00:26:32.765 [2024-11-20 09:17:09.664444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.766 [2024-11-20 09:17:09.664470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:32.766 qpair failed and we were unable to recover it.
00:26:32.766 [2024-11-20 09:17:09.664565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.766 [2024-11-20 09:17:09.664593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.766 qpair failed and we were unable to recover it.
00:26:32.766 [2024-11-20 09:17:09.664682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.766 [2024-11-20 09:17:09.664772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.766 qpair failed and we were unable to recover it.
00:26:32.766 [2024-11-20 09:17:09.664899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.766 [2024-11-20 09:17:09.664964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.766 qpair failed and we were unable to recover it.
00:26:32.766 [2024-11-20 09:17:09.665077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.766 [2024-11-20 09:17:09.665102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.766 qpair failed and we were unable to recover it.
00:26:32.766 [2024-11-20 09:17:09.665221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.766 [2024-11-20 09:17:09.665247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.766 qpair failed and we were unable to recover it.
00:26:32.766 [2024-11-20 09:17:09.665397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.766 [2024-11-20 09:17:09.665423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.766 qpair failed and we were unable to recover it.
00:26:32.766 [2024-11-20 09:17:09.665509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.766 [2024-11-20 09:17:09.665536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.766 qpair failed and we were unable to recover it.
00:26:32.766 [2024-11-20 09:17:09.665629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.766 [2024-11-20 09:17:09.665655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.766 qpair failed and we were unable to recover it.
00:26:32.766 [2024-11-20 09:17:09.665770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.766 [2024-11-20 09:17:09.665795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.766 qpair failed and we were unable to recover it.
00:26:32.766 [2024-11-20 09:17:09.665906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.766 [2024-11-20 09:17:09.665932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.766 qpair failed and we were unable to recover it.
00:26:32.766 [2024-11-20 09:17:09.666049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.766 [2024-11-20 09:17:09.666074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.766 qpair failed and we were unable to recover it.
00:26:32.766 [2024-11-20 09:17:09.666186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.766 [2024-11-20 09:17:09.666212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.766 qpair failed and we were unable to recover it.
00:26:32.766 [2024-11-20 09:17:09.666329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.766 [2024-11-20 09:17:09.666356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.766 qpair failed and we were unable to recover it.
00:26:32.766 [2024-11-20 09:17:09.666495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.766 [2024-11-20 09:17:09.666521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.766 qpair failed and we were unable to recover it.
00:26:32.766 [2024-11-20 09:17:09.666606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.766 [2024-11-20 09:17:09.666632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.766 qpair failed and we were unable to recover it.
00:26:32.766 [2024-11-20 09:17:09.666746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.766 [2024-11-20 09:17:09.666772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.766 qpair failed and we were unable to recover it.
00:26:32.766 [2024-11-20 09:17:09.666909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.766 [2024-11-20 09:17:09.666935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.766 qpair failed and we were unable to recover it.
00:26:32.766 [2024-11-20 09:17:09.667052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.766 [2024-11-20 09:17:09.667078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.766 qpair failed and we were unable to recover it.
00:26:32.766 [2024-11-20 09:17:09.667191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.766 [2024-11-20 09:17:09.667220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.766 qpair failed and we were unable to recover it.
00:26:32.766 [2024-11-20 09:17:09.667309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.766 [2024-11-20 09:17:09.667335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.766 qpair failed and we were unable to recover it.
00:26:32.766 [2024-11-20 09:17:09.667422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.766 [2024-11-20 09:17:09.667447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.766 qpair failed and we were unable to recover it.
00:26:32.766 [2024-11-20 09:17:09.667564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.766 [2024-11-20 09:17:09.667589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.766 qpair failed and we were unable to recover it.
00:26:32.766 [2024-11-20 09:17:09.667674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.766 [2024-11-20 09:17:09.667700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.766 qpair failed and we were unable to recover it.
00:26:32.766 [2024-11-20 09:17:09.667820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.766 [2024-11-20 09:17:09.667845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.766 qpair failed and we were unable to recover it.
00:26:32.766 [2024-11-20 09:17:09.667956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.766 [2024-11-20 09:17:09.667981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:32.766 qpair failed and we were unable to recover it.
00:26:32.766 [2024-11-20 09:17:09.668080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.766 [2024-11-20 09:17:09.668120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.766 qpair failed and we were unable to recover it.
00:26:32.766 [2024-11-20 09:17:09.668235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.766 [2024-11-20 09:17:09.668263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.766 qpair failed and we were unable to recover it.
00:26:32.766 [2024-11-20 09:17:09.668371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.766 [2024-11-20 09:17:09.668399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.766 qpair failed and we were unable to recover it.
00:26:32.766 [2024-11-20 09:17:09.668489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.766 [2024-11-20 09:17:09.668516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.766 qpair failed and we were unable to recover it.
00:26:32.766 [2024-11-20 09:17:09.668665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.766 [2024-11-20 09:17:09.668692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.766 qpair failed and we were unable to recover it.
00:26:32.766 [2024-11-20 09:17:09.668779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:32.766 [2024-11-20 09:17:09.668808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:32.766 qpair failed and we were unable to recover it.
00:26:32.766 [2024-11-20 09:17:09.668897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.766 [2024-11-20 09:17:09.668924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.766 qpair failed and we were unable to recover it. 00:26:32.767 [2024-11-20 09:17:09.669015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.669041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 00:26:32.767 [2024-11-20 09:17:09.669185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.669210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 00:26:32.767 [2024-11-20 09:17:09.669335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.669361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 00:26:32.767 [2024-11-20 09:17:09.669456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.669482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 
00:26:32.767 [2024-11-20 09:17:09.669586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.669611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 00:26:32.767 [2024-11-20 09:17:09.669704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.669732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 00:26:32.767 [2024-11-20 09:17:09.669841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.669867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 00:26:32.767 [2024-11-20 09:17:09.669971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.669998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 00:26:32.767 [2024-11-20 09:17:09.670088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.670115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 
00:26:32.767 [2024-11-20 09:17:09.670264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.670312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 00:26:32.767 [2024-11-20 09:17:09.670422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.670467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 00:26:32.767 [2024-11-20 09:17:09.670701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.670756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 00:26:32.767 [2024-11-20 09:17:09.670892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.670948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 00:26:32.767 [2024-11-20 09:17:09.671065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.671090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 
00:26:32.767 [2024-11-20 09:17:09.671183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.671209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 00:26:32.767 [2024-11-20 09:17:09.671320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.671346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 00:26:32.767 [2024-11-20 09:17:09.671519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.671575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 00:26:32.767 [2024-11-20 09:17:09.671740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.671806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 00:26:32.767 [2024-11-20 09:17:09.671998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.672026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 
00:26:32.767 [2024-11-20 09:17:09.672145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.672172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 00:26:32.767 [2024-11-20 09:17:09.672288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.672322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 00:26:32.767 [2024-11-20 09:17:09.672436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.672463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 00:26:32.767 [2024-11-20 09:17:09.672608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.672650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 00:26:32.767 [2024-11-20 09:17:09.672783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.672825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 
00:26:32.767 [2024-11-20 09:17:09.672950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.672975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 00:26:32.767 [2024-11-20 09:17:09.673063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.673089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 00:26:32.767 [2024-11-20 09:17:09.673211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.673251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 00:26:32.767 [2024-11-20 09:17:09.673394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.673434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 00:26:32.767 [2024-11-20 09:17:09.673561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.673589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 
00:26:32.767 [2024-11-20 09:17:09.673672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.673698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 00:26:32.767 [2024-11-20 09:17:09.673784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.673812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 00:26:32.767 [2024-11-20 09:17:09.673933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.673972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 00:26:32.767 [2024-11-20 09:17:09.674142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.674211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 00:26:32.767 [2024-11-20 09:17:09.674343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.674417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 
00:26:32.767 [2024-11-20 09:17:09.674630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.674695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 00:26:32.767 [2024-11-20 09:17:09.674921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.767 [2024-11-20 09:17:09.674949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:32.767 qpair failed and we were unable to recover it. 00:26:33.046 [2024-11-20 09:17:09.675041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.046 [2024-11-20 09:17:09.675068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.046 qpair failed and we were unable to recover it. 00:26:33.046 [2024-11-20 09:17:09.675189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.046 [2024-11-20 09:17:09.675216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.046 qpair failed and we were unable to recover it. 00:26:33.046 [2024-11-20 09:17:09.675350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.046 [2024-11-20 09:17:09.675377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.046 qpair failed and we were unable to recover it. 
00:26:33.046 [2024-11-20 09:17:09.675470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.046 [2024-11-20 09:17:09.675496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.046 qpair failed and we were unable to recover it. 00:26:33.046 [2024-11-20 09:17:09.675616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.046 [2024-11-20 09:17:09.675642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.046 qpair failed and we were unable to recover it. 00:26:33.046 [2024-11-20 09:17:09.675775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.046 [2024-11-20 09:17:09.675801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.046 qpair failed and we were unable to recover it. 00:26:33.046 [2024-11-20 09:17:09.675883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.046 [2024-11-20 09:17:09.675910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.046 qpair failed and we were unable to recover it. 00:26:33.046 [2024-11-20 09:17:09.675995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.046 [2024-11-20 09:17:09.676021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.046 qpair failed and we were unable to recover it. 
00:26:33.046 [2024-11-20 09:17:09.676096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.046 [2024-11-20 09:17:09.676123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.046 qpair failed and we were unable to recover it. 00:26:33.046 [2024-11-20 09:17:09.676226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.046 [2024-11-20 09:17:09.676252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.046 qpair failed and we were unable to recover it. 00:26:33.046 [2024-11-20 09:17:09.676353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.046 [2024-11-20 09:17:09.676380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.046 qpair failed and we were unable to recover it. 00:26:33.046 [2024-11-20 09:17:09.676465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.046 [2024-11-20 09:17:09.676492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.046 qpair failed and we were unable to recover it. 00:26:33.046 [2024-11-20 09:17:09.676597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.046 [2024-11-20 09:17:09.676622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.046 qpair failed and we were unable to recover it. 
00:26:33.046 [2024-11-20 09:17:09.676746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.046 [2024-11-20 09:17:09.676771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.046 qpair failed and we were unable to recover it. 00:26:33.046 [2024-11-20 09:17:09.676909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.046 [2024-11-20 09:17:09.676935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.046 qpair failed and we were unable to recover it. 00:26:33.046 [2024-11-20 09:17:09.677045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.046 [2024-11-20 09:17:09.677071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.046 qpair failed and we were unable to recover it. 00:26:33.046 [2024-11-20 09:17:09.677157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.046 [2024-11-20 09:17:09.677184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.046 qpair failed and we were unable to recover it. 00:26:33.046 [2024-11-20 09:17:09.677275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.046 [2024-11-20 09:17:09.677307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.046 qpair failed and we were unable to recover it. 
00:26:33.046 [2024-11-20 09:17:09.677436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.046 [2024-11-20 09:17:09.677462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.046 qpair failed and we were unable to recover it. 00:26:33.046 [2024-11-20 09:17:09.677569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.046 [2024-11-20 09:17:09.677594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.046 qpair failed and we were unable to recover it. 00:26:33.046 [2024-11-20 09:17:09.677685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.046 [2024-11-20 09:17:09.677727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.046 qpair failed and we were unable to recover it. 00:26:33.046 [2024-11-20 09:17:09.677822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.046 [2024-11-20 09:17:09.677850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.046 qpair failed and we were unable to recover it. 00:26:33.046 [2024-11-20 09:17:09.677930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.046 [2024-11-20 09:17:09.677957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 
00:26:33.047 [2024-11-20 09:17:09.678085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.678139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 00:26:33.047 [2024-11-20 09:17:09.678297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.678332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 00:26:33.047 [2024-11-20 09:17:09.678458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.678486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 00:26:33.047 [2024-11-20 09:17:09.678587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.678614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 00:26:33.047 [2024-11-20 09:17:09.678694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.678721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 
00:26:33.047 [2024-11-20 09:17:09.678815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.678871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 00:26:33.047 [2024-11-20 09:17:09.679096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.679165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 00:26:33.047 [2024-11-20 09:17:09.679321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.679358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 00:26:33.047 [2024-11-20 09:17:09.679465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.679491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 00:26:33.047 [2024-11-20 09:17:09.679617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.679643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 
00:26:33.047 [2024-11-20 09:17:09.679763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.679789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 00:26:33.047 [2024-11-20 09:17:09.679894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.679920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 00:26:33.047 [2024-11-20 09:17:09.680017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.680045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 00:26:33.047 [2024-11-20 09:17:09.680130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.680222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 00:26:33.047 [2024-11-20 09:17:09.680406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.680432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 
00:26:33.047 [2024-11-20 09:17:09.680521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.680547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 00:26:33.047 [2024-11-20 09:17:09.680662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.680688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 00:26:33.047 [2024-11-20 09:17:09.680774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.680801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 00:26:33.047 [2024-11-20 09:17:09.680889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.680953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 00:26:33.047 [2024-11-20 09:17:09.681112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.681193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 
00:26:33.047 [2024-11-20 09:17:09.681424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.681450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 00:26:33.047 [2024-11-20 09:17:09.681554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.681580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 00:26:33.047 [2024-11-20 09:17:09.681728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.681754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 00:26:33.047 [2024-11-20 09:17:09.681861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.681903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 00:26:33.047 [2024-11-20 09:17:09.681985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.682012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 
00:26:33.047 [2024-11-20 09:17:09.682142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.682168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 00:26:33.047 [2024-11-20 09:17:09.682271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.682297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 00:26:33.047 [2024-11-20 09:17:09.682392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.682417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 00:26:33.047 [2024-11-20 09:17:09.682495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.682521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 00:26:33.047 [2024-11-20 09:17:09.682663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.682689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 
00:26:33.047 [2024-11-20 09:17:09.682789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.682816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 00:26:33.047 [2024-11-20 09:17:09.682936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.682962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 00:26:33.047 [2024-11-20 09:17:09.683049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.683080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 00:26:33.047 [2024-11-20 09:17:09.683297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.683383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 00:26:33.047 [2024-11-20 09:17:09.683528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.683555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 
00:26:33.047 [2024-11-20 09:17:09.683792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.683860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 00:26:33.047 [2024-11-20 09:17:09.684150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.047 [2024-11-20 09:17:09.684222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.047 qpair failed and we were unable to recover it. 00:26:33.047 [2024-11-20 09:17:09.684438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.684465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 00:26:33.048 [2024-11-20 09:17:09.684557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.684584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 00:26:33.048 [2024-11-20 09:17:09.684758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.684784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 
00:26:33.048 [2024-11-20 09:17:09.685056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.685123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 00:26:33.048 [2024-11-20 09:17:09.685379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.685407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 00:26:33.048 [2024-11-20 09:17:09.685498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.685524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 00:26:33.048 [2024-11-20 09:17:09.685637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.685663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 00:26:33.048 [2024-11-20 09:17:09.685781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.685807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 
00:26:33.048 [2024-11-20 09:17:09.685997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.686072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 00:26:33.048 [2024-11-20 09:17:09.686263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.686289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 00:26:33.048 [2024-11-20 09:17:09.686412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.686437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 00:26:33.048 [2024-11-20 09:17:09.686531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.686557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 00:26:33.048 [2024-11-20 09:17:09.686712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.686776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 
00:26:33.048 [2024-11-20 09:17:09.687075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.687139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 00:26:33.048 [2024-11-20 09:17:09.687373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.687400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 00:26:33.048 [2024-11-20 09:17:09.687500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.687526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 00:26:33.048 [2024-11-20 09:17:09.687666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.687692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 00:26:33.048 [2024-11-20 09:17:09.687808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.687835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 
00:26:33.048 [2024-11-20 09:17:09.687928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.687954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 00:26:33.048 [2024-11-20 09:17:09.688070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.688097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 00:26:33.048 [2024-11-20 09:17:09.688184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.688212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 00:26:33.048 [2024-11-20 09:17:09.688335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.688362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 00:26:33.048 [2024-11-20 09:17:09.688497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.688537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 
00:26:33.048 [2024-11-20 09:17:09.688729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.688799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 00:26:33.048 [2024-11-20 09:17:09.689101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.689169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 00:26:33.048 [2024-11-20 09:17:09.689393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.689462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 00:26:33.048 [2024-11-20 09:17:09.689730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.689796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 00:26:33.048 [2024-11-20 09:17:09.690053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.690120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 
00:26:33.048 [2024-11-20 09:17:09.690436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.690502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 00:26:33.048 [2024-11-20 09:17:09.690769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.690835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 00:26:33.048 [2024-11-20 09:17:09.691047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.691117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 00:26:33.048 [2024-11-20 09:17:09.691377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.691444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 00:26:33.048 [2024-11-20 09:17:09.691709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.691776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 
00:26:33.048 [2024-11-20 09:17:09.692068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.692134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 00:26:33.048 [2024-11-20 09:17:09.692400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.692467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 00:26:33.048 [2024-11-20 09:17:09.692719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.692797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 00:26:33.048 [2024-11-20 09:17:09.692997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.693062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 00:26:33.048 [2024-11-20 09:17:09.693346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.048 [2024-11-20 09:17:09.693414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.048 qpair failed and we were unable to recover it. 
00:26:33.049 [2024-11-20 09:17:09.693639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.693704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 00:26:33.049 [2024-11-20 09:17:09.693960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.694028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 00:26:33.049 [2024-11-20 09:17:09.694272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.694354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 00:26:33.049 [2024-11-20 09:17:09.694610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.694676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 00:26:33.049 [2024-11-20 09:17:09.694883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.694950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 
00:26:33.049 [2024-11-20 09:17:09.695245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.695324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 00:26:33.049 [2024-11-20 09:17:09.695592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.695659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 00:26:33.049 [2024-11-20 09:17:09.695854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.695919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 00:26:33.049 [2024-11-20 09:17:09.696212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.696277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 00:26:33.049 [2024-11-20 09:17:09.696553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.696620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 
00:26:33.049 [2024-11-20 09:17:09.696917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.696944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 00:26:33.049 [2024-11-20 09:17:09.697039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.697066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 00:26:33.049 [2024-11-20 09:17:09.697158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.697185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 00:26:33.049 [2024-11-20 09:17:09.697281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.697316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 00:26:33.049 [2024-11-20 09:17:09.697406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.697464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 
00:26:33.049 [2024-11-20 09:17:09.697674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.697741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 00:26:33.049 [2024-11-20 09:17:09.698028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.698095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 00:26:33.049 [2024-11-20 09:17:09.698380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.698448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 00:26:33.049 [2024-11-20 09:17:09.698711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.698777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 00:26:33.049 [2024-11-20 09:17:09.699070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.699134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 
00:26:33.049 [2024-11-20 09:17:09.699419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.699487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 00:26:33.049 [2024-11-20 09:17:09.699701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.699729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 00:26:33.049 [2024-11-20 09:17:09.699869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.699896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 00:26:33.049 [2024-11-20 09:17:09.700112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.700178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 00:26:33.049 [2024-11-20 09:17:09.700500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.700528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 
00:26:33.049 [2024-11-20 09:17:09.700651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.700677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 00:26:33.049 [2024-11-20 09:17:09.700865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.700932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 00:26:33.049 [2024-11-20 09:17:09.701158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.701225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 00:26:33.049 [2024-11-20 09:17:09.701463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.701530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 00:26:33.049 [2024-11-20 09:17:09.701787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.701853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 
00:26:33.049 [2024-11-20 09:17:09.702155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.702221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 00:26:33.049 [2024-11-20 09:17:09.702484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.702550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 00:26:33.049 [2024-11-20 09:17:09.702810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.702875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 00:26:33.049 [2024-11-20 09:17:09.703106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.703133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 00:26:33.049 [2024-11-20 09:17:09.703271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.703299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 
00:26:33.049 [2024-11-20 09:17:09.703427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.703454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 00:26:33.049 [2024-11-20 09:17:09.703610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.049 [2024-11-20 09:17:09.703674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.049 qpair failed and we were unable to recover it. 00:26:33.050 [2024-11-20 09:17:09.703962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.050 [2024-11-20 09:17:09.704039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.050 qpair failed and we were unable to recover it. 00:26:33.050 [2024-11-20 09:17:09.704239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.050 [2024-11-20 09:17:09.704293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.050 qpair failed and we were unable to recover it. 00:26:33.050 [2024-11-20 09:17:09.704421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.050 [2024-11-20 09:17:09.704447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.050 qpair failed and we were unable to recover it. 
00:26:33.050 [2024-11-20 09:17:09.704669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.050 [2024-11-20 09:17:09.704735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.050 qpair failed and we were unable to recover it. 00:26:33.050 [2024-11-20 09:17:09.705035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.050 [2024-11-20 09:17:09.705101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.050 qpair failed and we were unable to recover it. 00:26:33.050 [2024-11-20 09:17:09.705364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.050 [2024-11-20 09:17:09.705434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.050 qpair failed and we were unable to recover it. 00:26:33.050 [2024-11-20 09:17:09.705680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.050 [2024-11-20 09:17:09.705746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.050 qpair failed and we were unable to recover it. 00:26:33.050 [2024-11-20 09:17:09.705985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.050 [2024-11-20 09:17:09.706012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.050 qpair failed and we were unable to recover it. 
00:26:33.050 [2024-11-20 09:17:09.706124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.050 [2024-11-20 09:17:09.706151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:33.050 qpair failed and we were unable to recover it.
00:26:33.050 [2024-11-20 09:17:09.706238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.050 [2024-11-20 09:17:09.706265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:33.050 qpair failed and we were unable to recover it.
00:26:33.050 [2024-11-20 09:17:09.706397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.050 [2024-11-20 09:17:09.706437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.050 qpair failed and we were unable to recover it.
00:26:33.050 [2024-11-20 09:17:09.706667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.050 [2024-11-20 09:17:09.706739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.050 qpair failed and we were unable to recover it.
00:26:33.050 [2024-11-20 09:17:09.707042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.050 [2024-11-20 09:17:09.707108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.050 qpair failed and we were unable to recover it.
00:26:33.050 [2024-11-20 09:17:09.707358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.050 [2024-11-20 09:17:09.707424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.050 qpair failed and we were unable to recover it.
00:26:33.050 [2024-11-20 09:17:09.707730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.050 [2024-11-20 09:17:09.707794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.050 qpair failed and we were unable to recover it.
00:26:33.050 [2024-11-20 09:17:09.708050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.050 [2024-11-20 09:17:09.708115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.050 qpair failed and we were unable to recover it.
00:26:33.050 [2024-11-20 09:17:09.708330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.050 [2024-11-20 09:17:09.708399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.050 qpair failed and we were unable to recover it.
00:26:33.050 [2024-11-20 09:17:09.708656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.050 [2024-11-20 09:17:09.708720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.050 qpair failed and we were unable to recover it.
00:26:33.050 [2024-11-20 09:17:09.708944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.050 [2024-11-20 09:17:09.709009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.050 qpair failed and we were unable to recover it.
00:26:33.050 [2024-11-20 09:17:09.709260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.050 [2024-11-20 09:17:09.709286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.050 qpair failed and we were unable to recover it.
00:26:33.050 [2024-11-20 09:17:09.709382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.050 [2024-11-20 09:17:09.709409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.050 qpair failed and we were unable to recover it.
00:26:33.050 [2024-11-20 09:17:09.709553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.050 [2024-11-20 09:17:09.709579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.050 qpair failed and we were unable to recover it.
00:26:33.050 [2024-11-20 09:17:09.709708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.050 [2024-11-20 09:17:09.709780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.050 qpair failed and we were unable to recover it.
00:26:33.050 [2024-11-20 09:17:09.709995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.050 [2024-11-20 09:17:09.710058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.050 qpair failed and we were unable to recover it.
00:26:33.050 [2024-11-20 09:17:09.710321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.050 [2024-11-20 09:17:09.710386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.050 qpair failed and we were unable to recover it.
00:26:33.050 [2024-11-20 09:17:09.710672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.050 [2024-11-20 09:17:09.710736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.050 qpair failed and we were unable to recover it.
00:26:33.050 [2024-11-20 09:17:09.710979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.050 [2024-11-20 09:17:09.711005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.050 qpair failed and we were unable to recover it.
00:26:33.050 [2024-11-20 09:17:09.711122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.050 [2024-11-20 09:17:09.711154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.050 qpair failed and we were unable to recover it.
00:26:33.050 [2024-11-20 09:17:09.711269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.050 [2024-11-20 09:17:09.711295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.050 qpair failed and we were unable to recover it.
00:26:33.050 [2024-11-20 09:17:09.711479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.050 [2024-11-20 09:17:09.711543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.050 qpair failed and we were unable to recover it.
00:26:33.050 [2024-11-20 09:17:09.711806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.050 [2024-11-20 09:17:09.711873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.050 qpair failed and we were unable to recover it.
00:26:33.050 [2024-11-20 09:17:09.712130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.050 [2024-11-20 09:17:09.712192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.050 qpair failed and we were unable to recover it.
00:26:33.050 [2024-11-20 09:17:09.712466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.050 [2024-11-20 09:17:09.712532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.050 qpair failed and we were unable to recover it.
00:26:33.050 [2024-11-20 09:17:09.712761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.050 [2024-11-20 09:17:09.712826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.050 qpair failed and we were unable to recover it.
00:26:33.050 [2024-11-20 09:17:09.713068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.050 [2024-11-20 09:17:09.713132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.050 qpair failed and we were unable to recover it.
00:26:33.050 [2024-11-20 09:17:09.713387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.713455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.051 [2024-11-20 09:17:09.713715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.713781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.051 [2024-11-20 09:17:09.714076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.714139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.051 [2024-11-20 09:17:09.714387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.714453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.051 [2024-11-20 09:17:09.714733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.714797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.051 [2024-11-20 09:17:09.715006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.715033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.051 [2024-11-20 09:17:09.715191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.715218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.051 [2024-11-20 09:17:09.715316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.715342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.051 [2024-11-20 09:17:09.715432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.715499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.051 [2024-11-20 09:17:09.715761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.715825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.051 [2024-11-20 09:17:09.716046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.716073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.051 [2024-11-20 09:17:09.716194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.716220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.051 [2024-11-20 09:17:09.716316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.716343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.051 [2024-11-20 09:17:09.716427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.716453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.051 [2024-11-20 09:17:09.716651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.716677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.051 [2024-11-20 09:17:09.716784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.716810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.051 [2024-11-20 09:17:09.716922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.716948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.051 [2024-11-20 09:17:09.717039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.717065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.051 [2024-11-20 09:17:09.717254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.717347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.051 [2024-11-20 09:17:09.717604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.717681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.051 [2024-11-20 09:17:09.717876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.717941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.051 [2024-11-20 09:17:09.718166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.718232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.051 [2024-11-20 09:17:09.718469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.718534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.051 [2024-11-20 09:17:09.718793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.718857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.051 [2024-11-20 09:17:09.719091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.719155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.051 [2024-11-20 09:17:09.719443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.719511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.051 [2024-11-20 09:17:09.719734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.719799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.051 [2024-11-20 09:17:09.720087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.720150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.051 [2024-11-20 09:17:09.720335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.720400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.051 [2024-11-20 09:17:09.720616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.720681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.051 [2024-11-20 09:17:09.720920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.720983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.051 [2024-11-20 09:17:09.721271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.721347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.051 [2024-11-20 09:17:09.721617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.721682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.051 [2024-11-20 09:17:09.721941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.722004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.051 [2024-11-20 09:17:09.722231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.722295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.051 [2024-11-20 09:17:09.722540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.051 [2024-11-20 09:17:09.722605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.051 qpair failed and we were unable to recover it.
00:26:33.052 [2024-11-20 09:17:09.722820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.052 [2024-11-20 09:17:09.722885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.052 qpair failed and we were unable to recover it.
00:26:33.052 [2024-11-20 09:17:09.723103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.052 [2024-11-20 09:17:09.723168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.052 qpair failed and we were unable to recover it.
00:26:33.052 [2024-11-20 09:17:09.723368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.052 [2024-11-20 09:17:09.723435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.052 qpair failed and we were unable to recover it.
00:26:33.052 [2024-11-20 09:17:09.723728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.052 [2024-11-20 09:17:09.723792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.052 qpair failed and we were unable to recover it.
00:26:33.052 [2024-11-20 09:17:09.724045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.052 [2024-11-20 09:17:09.724071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.052 qpair failed and we were unable to recover it.
00:26:33.052 [2024-11-20 09:17:09.724213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.052 [2024-11-20 09:17:09.724239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.052 qpair failed and we were unable to recover it.
00:26:33.052 [2024-11-20 09:17:09.724344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.052 [2024-11-20 09:17:09.724370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.052 qpair failed and we were unable to recover it.
00:26:33.052 [2024-11-20 09:17:09.724538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.052 [2024-11-20 09:17:09.724601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.052 qpair failed and we were unable to recover it.
00:26:33.052 [2024-11-20 09:17:09.724853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.052 [2024-11-20 09:17:09.724915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.052 qpair failed and we were unable to recover it.
00:26:33.052 [2024-11-20 09:17:09.725198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.052 [2024-11-20 09:17:09.725262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.052 qpair failed and we were unable to recover it.
00:26:33.052 [2024-11-20 09:17:09.725530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.052 [2024-11-20 09:17:09.725594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.052 qpair failed and we were unable to recover it.
00:26:33.052 [2024-11-20 09:17:09.725856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.052 [2024-11-20 09:17:09.725924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.052 qpair failed and we were unable to recover it.
00:26:33.052 [2024-11-20 09:17:09.726170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.052 [2024-11-20 09:17:09.726233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.052 qpair failed and we were unable to recover it.
00:26:33.052 [2024-11-20 09:17:09.726565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.052 [2024-11-20 09:17:09.726630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.052 qpair failed and we were unable to recover it.
00:26:33.052 [2024-11-20 09:17:09.726920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.052 [2024-11-20 09:17:09.726984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.052 qpair failed and we were unable to recover it.
00:26:33.052 [2024-11-20 09:17:09.727220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.052 [2024-11-20 09:17:09.727283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.052 qpair failed and we were unable to recover it.
00:26:33.052 [2024-11-20 09:17:09.727616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.052 [2024-11-20 09:17:09.727681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.052 qpair failed and we were unable to recover it.
00:26:33.052 [2024-11-20 09:17:09.727945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.052 [2024-11-20 09:17:09.728009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.052 qpair failed and we were unable to recover it.
00:26:33.052 [2024-11-20 09:17:09.728251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.052 [2024-11-20 09:17:09.728277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.052 qpair failed and we were unable to recover it.
00:26:33.052 [2024-11-20 09:17:09.728403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.052 [2024-11-20 09:17:09.728430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.052 qpair failed and we were unable to recover it.
00:26:33.052 [2024-11-20 09:17:09.728519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.052 [2024-11-20 09:17:09.728544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.052 qpair failed and we were unable to recover it.
00:26:33.052 [2024-11-20 09:17:09.728656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.052 [2024-11-20 09:17:09.728683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.052 qpair failed and we were unable to recover it.
00:26:33.052 [2024-11-20 09:17:09.728801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.052 [2024-11-20 09:17:09.728827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.052 qpair failed and we were unable to recover it.
00:26:33.052 [2024-11-20 09:17:09.729011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.052 [2024-11-20 09:17:09.729076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.052 qpair failed and we were unable to recover it.
00:26:33.052 [2024-11-20 09:17:09.729337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.052 [2024-11-20 09:17:09.729420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.052 qpair failed and we were unable to recover it.
00:26:33.052 [2024-11-20 09:17:09.729667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.052 [2024-11-20 09:17:09.729732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.052 qpair failed and we were unable to recover it.
00:26:33.052 [2024-11-20 09:17:09.730095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.052 [2024-11-20 09:17:09.730159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.052 qpair failed and we were unable to recover it.
00:26:33.052 [2024-11-20 09:17:09.730377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.052 [2024-11-20 09:17:09.730444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.052 qpair failed and we were unable to recover it.
00:26:33.052 [2024-11-20 09:17:09.730698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.052 [2024-11-20 09:17:09.730763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.052 qpair failed and we were unable to recover it.
00:26:33.052 [2024-11-20 09:17:09.731023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.052 [2024-11-20 09:17:09.731087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.052 qpair failed and we were unable to recover it.
00:26:33.052 [2024-11-20 09:17:09.731373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.052 [2024-11-20 09:17:09.731439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.052 qpair failed and we were unable to recover it.
00:26:33.052 [2024-11-20 09:17:09.731713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.052 [2024-11-20 09:17:09.731777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.052 qpair failed and we were unable to recover it.
00:26:33.052 [2024-11-20 09:17:09.732020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.052 [2024-11-20 09:17:09.732084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.052 qpair failed and we were unable to recover it. 00:26:33.052 [2024-11-20 09:17:09.732332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.052 [2024-11-20 09:17:09.732398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.052 qpair failed and we were unable to recover it. 00:26:33.052 [2024-11-20 09:17:09.732653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.052 [2024-11-20 09:17:09.732718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.052 qpair failed and we were unable to recover it. 00:26:33.052 [2024-11-20 09:17:09.733010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.052 [2024-11-20 09:17:09.733074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.052 qpair failed and we were unable to recover it. 00:26:33.052 [2024-11-20 09:17:09.733372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.052 [2024-11-20 09:17:09.733438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.052 qpair failed and we were unable to recover it. 
00:26:33.052 [2024-11-20 09:17:09.733666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.052 [2024-11-20 09:17:09.733729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.052 qpair failed and we were unable to recover it. 00:26:33.052 [2024-11-20 09:17:09.733998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.734062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 00:26:33.053 [2024-11-20 09:17:09.734289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.734366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 00:26:33.053 [2024-11-20 09:17:09.734657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.734720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 00:26:33.053 [2024-11-20 09:17:09.734977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.735004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 
00:26:33.053 [2024-11-20 09:17:09.735144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.735171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 00:26:33.053 [2024-11-20 09:17:09.735285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.735326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 00:26:33.053 [2024-11-20 09:17:09.735554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.735618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 00:26:33.053 [2024-11-20 09:17:09.735871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.735936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 00:26:33.053 [2024-11-20 09:17:09.736180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.736245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 
00:26:33.053 [2024-11-20 09:17:09.736553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.736619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 00:26:33.053 [2024-11-20 09:17:09.736860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.736924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 00:26:33.053 [2024-11-20 09:17:09.737220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.737285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 00:26:33.053 [2024-11-20 09:17:09.737555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.737619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 00:26:33.053 [2024-11-20 09:17:09.737837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.737915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 
00:26:33.053 [2024-11-20 09:17:09.738156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.738183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 00:26:33.053 [2024-11-20 09:17:09.738297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.738331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 00:26:33.053 [2024-11-20 09:17:09.738420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.738446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 00:26:33.053 [2024-11-20 09:17:09.738544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.738600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 00:26:33.053 [2024-11-20 09:17:09.738826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.738853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 
00:26:33.053 [2024-11-20 09:17:09.738991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.739017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 00:26:33.053 [2024-11-20 09:17:09.739162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.739227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 00:26:33.053 [2024-11-20 09:17:09.739469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.739534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 00:26:33.053 [2024-11-20 09:17:09.739829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.739892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 00:26:33.053 [2024-11-20 09:17:09.740160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.740186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 
00:26:33.053 [2024-11-20 09:17:09.740313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.740340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 00:26:33.053 [2024-11-20 09:17:09.740483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.740508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 00:26:33.053 [2024-11-20 09:17:09.740768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.740833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 00:26:33.053 [2024-11-20 09:17:09.741107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.741171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 00:26:33.053 [2024-11-20 09:17:09.741430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.741498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 
00:26:33.053 [2024-11-20 09:17:09.741798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.741862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 00:26:33.053 [2024-11-20 09:17:09.742146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.742210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 00:26:33.053 [2024-11-20 09:17:09.742515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.742580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 00:26:33.053 [2024-11-20 09:17:09.742841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.742905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 00:26:33.053 [2024-11-20 09:17:09.743192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.743255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 
00:26:33.053 [2024-11-20 09:17:09.743514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.743578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 00:26:33.053 [2024-11-20 09:17:09.743829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.743893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 00:26:33.053 [2024-11-20 09:17:09.744146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.744172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 00:26:33.053 [2024-11-20 09:17:09.744317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.053 [2024-11-20 09:17:09.744344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.053 qpair failed and we were unable to recover it. 00:26:33.053 [2024-11-20 09:17:09.744454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.744479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 
00:26:33.054 [2024-11-20 09:17:09.744629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.744692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 00:26:33.054 [2024-11-20 09:17:09.744896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.744968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 00:26:33.054 [2024-11-20 09:17:09.745265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.745346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 00:26:33.054 [2024-11-20 09:17:09.745605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.745669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 00:26:33.054 [2024-11-20 09:17:09.745930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.745993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 
00:26:33.054 [2024-11-20 09:17:09.746274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.746355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 00:26:33.054 [2024-11-20 09:17:09.746645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.746709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 00:26:33.054 [2024-11-20 09:17:09.746913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.746976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 00:26:33.054 [2024-11-20 09:17:09.747211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.747237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 00:26:33.054 [2024-11-20 09:17:09.747330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.747357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 
00:26:33.054 [2024-11-20 09:17:09.747500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.747526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 00:26:33.054 [2024-11-20 09:17:09.747645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.747728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 00:26:33.054 [2024-11-20 09:17:09.747937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.747988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 00:26:33.054 [2024-11-20 09:17:09.748230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.748294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 00:26:33.054 [2024-11-20 09:17:09.748593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.748657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 
00:26:33.054 [2024-11-20 09:17:09.748926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.748989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 00:26:33.054 [2024-11-20 09:17:09.749238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.749315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 00:26:33.054 [2024-11-20 09:17:09.749579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.749646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 00:26:33.054 [2024-11-20 09:17:09.749892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.749955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 00:26:33.054 [2024-11-20 09:17:09.750243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.750318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 
00:26:33.054 [2024-11-20 09:17:09.750579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.750644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 00:26:33.054 [2024-11-20 09:17:09.750887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.750951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 00:26:33.054 [2024-11-20 09:17:09.751199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.751264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 00:26:33.054 [2024-11-20 09:17:09.751529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.751593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 00:26:33.054 [2024-11-20 09:17:09.751871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.751934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 
00:26:33.054 [2024-11-20 09:17:09.752186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.752250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 00:26:33.054 [2024-11-20 09:17:09.752554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.752619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 00:26:33.054 [2024-11-20 09:17:09.752865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.752931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 00:26:33.054 [2024-11-20 09:17:09.753176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.753242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 00:26:33.054 [2024-11-20 09:17:09.753556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.753622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 
00:26:33.054 [2024-11-20 09:17:09.753901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.753964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 00:26:33.054 [2024-11-20 09:17:09.754215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.754279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 00:26:33.054 [2024-11-20 09:17:09.754576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.754641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 00:26:33.054 [2024-11-20 09:17:09.754838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.754905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 00:26:33.054 [2024-11-20 09:17:09.755111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.755178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 
00:26:33.054 [2024-11-20 09:17:09.755438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.755507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 00:26:33.054 [2024-11-20 09:17:09.755754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.755820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 00:26:33.054 [2024-11-20 09:17:09.756064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.054 [2024-11-20 09:17:09.756131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.054 qpair failed and we were unable to recover it. 00:26:33.055 [2024-11-20 09:17:09.756360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.055 [2024-11-20 09:17:09.756425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.055 qpair failed and we were unable to recover it. 00:26:33.055 [2024-11-20 09:17:09.756713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.055 [2024-11-20 09:17:09.756777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.055 qpair failed and we were unable to recover it. 
00:26:33.055 [2024-11-20 09:17:09.757018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.055 [2024-11-20 09:17:09.757082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.055 qpair failed and we were unable to recover it.
[... identical connect()/qpair-failure triplet repeated for each reconnect attempt from 09:17:09.757336 through 09:17:09.793580, all with errno = 111, tqpair=0x18fffa0, addr=10.0.0.2, port=4420 ...]
00:26:33.058 [2024-11-20 09:17:09.793837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.058 [2024-11-20 09:17:09.793901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.058 qpair failed and we were unable to recover it.
00:26:33.058 [2024-11-20 09:17:09.794110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.058 [2024-11-20 09:17:09.794175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.058 qpair failed and we were unable to recover it. 00:26:33.058 [2024-11-20 09:17:09.794433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.058 [2024-11-20 09:17:09.794500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.058 qpair failed and we were unable to recover it. 00:26:33.058 [2024-11-20 09:17:09.794697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.058 [2024-11-20 09:17:09.794764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.058 qpair failed and we were unable to recover it. 00:26:33.058 [2024-11-20 09:17:09.795048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.058 [2024-11-20 09:17:09.795112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.058 qpair failed and we were unable to recover it. 00:26:33.058 [2024-11-20 09:17:09.795301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.058 [2024-11-20 09:17:09.795381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.058 qpair failed and we were unable to recover it. 
00:26:33.058 [2024-11-20 09:17:09.795651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.058 [2024-11-20 09:17:09.795717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.058 qpair failed and we were unable to recover it. 00:26:33.058 [2024-11-20 09:17:09.795970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.058 [2024-11-20 09:17:09.796033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.058 qpair failed and we were unable to recover it. 00:26:33.058 [2024-11-20 09:17:09.796259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.058 [2024-11-20 09:17:09.796336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.058 qpair failed and we were unable to recover it. 00:26:33.058 [2024-11-20 09:17:09.796570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.058 [2024-11-20 09:17:09.796634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.058 qpair failed and we were unable to recover it. 00:26:33.058 [2024-11-20 09:17:09.796886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.058 [2024-11-20 09:17:09.796961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.058 qpair failed and we were unable to recover it. 
00:26:33.058 [2024-11-20 09:17:09.797203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.058 [2024-11-20 09:17:09.797267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.058 qpair failed and we were unable to recover it. 00:26:33.058 [2024-11-20 09:17:09.797552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.058 [2024-11-20 09:17:09.797616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.058 qpair failed and we were unable to recover it. 00:26:33.058 [2024-11-20 09:17:09.797873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.058 [2024-11-20 09:17:09.797937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.058 qpair failed and we were unable to recover it. 00:26:33.058 [2024-11-20 09:17:09.798204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.058 [2024-11-20 09:17:09.798268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.058 qpair failed and we were unable to recover it. 00:26:33.058 [2024-11-20 09:17:09.798576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.058 [2024-11-20 09:17:09.798641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.058 qpair failed and we were unable to recover it. 
00:26:33.058 [2024-11-20 09:17:09.798891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.058 [2024-11-20 09:17:09.798955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.058 qpair failed and we were unable to recover it. 00:26:33.058 [2024-11-20 09:17:09.799241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.058 [2024-11-20 09:17:09.799321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.058 qpair failed and we were unable to recover it. 00:26:33.058 [2024-11-20 09:17:09.799583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.058 [2024-11-20 09:17:09.799648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.058 qpair failed and we were unable to recover it. 00:26:33.058 [2024-11-20 09:17:09.799866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.058 [2024-11-20 09:17:09.799929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.058 qpair failed and we were unable to recover it. 00:26:33.058 [2024-11-20 09:17:09.800174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.058 [2024-11-20 09:17:09.800238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.058 qpair failed and we were unable to recover it. 
00:26:33.058 [2024-11-20 09:17:09.800525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.058 [2024-11-20 09:17:09.800590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.058 qpair failed and we were unable to recover it. 00:26:33.058 [2024-11-20 09:17:09.800833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.058 [2024-11-20 09:17:09.800897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.058 qpair failed and we were unable to recover it. 00:26:33.058 [2024-11-20 09:17:09.801105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.058 [2024-11-20 09:17:09.801168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.058 qpair failed and we were unable to recover it. 00:26:33.058 [2024-11-20 09:17:09.801453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.058 [2024-11-20 09:17:09.801519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.058 qpair failed and we were unable to recover it. 00:26:33.058 [2024-11-20 09:17:09.801799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.058 [2024-11-20 09:17:09.801863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.058 qpair failed and we were unable to recover it. 
00:26:33.058 [2024-11-20 09:17:09.802140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.058 [2024-11-20 09:17:09.802204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.058 qpair failed and we were unable to recover it. 00:26:33.058 [2024-11-20 09:17:09.802501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.058 [2024-11-20 09:17:09.802567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.058 qpair failed and we were unable to recover it. 00:26:33.058 [2024-11-20 09:17:09.802810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.058 [2024-11-20 09:17:09.802874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.058 qpair failed and we were unable to recover it. 00:26:33.058 [2024-11-20 09:17:09.803119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.058 [2024-11-20 09:17:09.803183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.058 qpair failed and we were unable to recover it. 00:26:33.058 [2024-11-20 09:17:09.803476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.058 [2024-11-20 09:17:09.803541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.058 qpair failed and we were unable to recover it. 
00:26:33.058 [2024-11-20 09:17:09.803835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.058 [2024-11-20 09:17:09.803899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.058 qpair failed and we were unable to recover it. 00:26:33.058 [2024-11-20 09:17:09.804155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.058 [2024-11-20 09:17:09.804220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.058 qpair failed and we were unable to recover it. 00:26:33.058 [2024-11-20 09:17:09.804461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.058 [2024-11-20 09:17:09.804526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 00:26:33.059 [2024-11-20 09:17:09.804809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.804873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 00:26:33.059 [2024-11-20 09:17:09.805129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.805194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 
00:26:33.059 [2024-11-20 09:17:09.805465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.805531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 00:26:33.059 [2024-11-20 09:17:09.805784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.805861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 00:26:33.059 [2024-11-20 09:17:09.806103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.806168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 00:26:33.059 [2024-11-20 09:17:09.806468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.806534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 00:26:33.059 [2024-11-20 09:17:09.806797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.806860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 
00:26:33.059 [2024-11-20 09:17:09.807150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.807215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 00:26:33.059 [2024-11-20 09:17:09.807428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.807496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 00:26:33.059 [2024-11-20 09:17:09.807748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.807813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 00:26:33.059 [2024-11-20 09:17:09.808086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.808151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 00:26:33.059 [2024-11-20 09:17:09.808410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.808475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 
00:26:33.059 [2024-11-20 09:17:09.808720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.808786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 00:26:33.059 [2024-11-20 09:17:09.809023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.809086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 00:26:33.059 [2024-11-20 09:17:09.809295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.809389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 00:26:33.059 [2024-11-20 09:17:09.809684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.809747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 00:26:33.059 [2024-11-20 09:17:09.810004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.810067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 
00:26:33.059 [2024-11-20 09:17:09.810339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.810405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 00:26:33.059 [2024-11-20 09:17:09.810665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.810728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 00:26:33.059 [2024-11-20 09:17:09.810957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.811020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 00:26:33.059 [2024-11-20 09:17:09.811282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.811364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 00:26:33.059 [2024-11-20 09:17:09.811611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.811675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 
00:26:33.059 [2024-11-20 09:17:09.811918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.811982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 00:26:33.059 [2024-11-20 09:17:09.812263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.812338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 00:26:33.059 [2024-11-20 09:17:09.812637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.812700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 00:26:33.059 [2024-11-20 09:17:09.813003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.813066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 00:26:33.059 [2024-11-20 09:17:09.813271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.813348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 
00:26:33.059 [2024-11-20 09:17:09.813641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.813704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 00:26:33.059 [2024-11-20 09:17:09.813959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.814023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 00:26:33.059 [2024-11-20 09:17:09.814352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.814418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 00:26:33.059 [2024-11-20 09:17:09.814709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.814786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 00:26:33.059 [2024-11-20 09:17:09.815039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.815106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 
00:26:33.059 [2024-11-20 09:17:09.815368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.815434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 00:26:33.059 [2024-11-20 09:17:09.815653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.815716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 00:26:33.059 [2024-11-20 09:17:09.816001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.816065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 00:26:33.059 [2024-11-20 09:17:09.816273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.816350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 00:26:33.059 [2024-11-20 09:17:09.816561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.816633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 
00:26:33.059 [2024-11-20 09:17:09.816896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-20 09:17:09.816960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.059 qpair failed and we were unable to recover it. 00:26:33.060 [2024-11-20 09:17:09.817173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.060 [2024-11-20 09:17:09.817236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.060 qpair failed and we were unable to recover it. 00:26:33.060 [2024-11-20 09:17:09.817482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.060 [2024-11-20 09:17:09.817549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.060 qpair failed and we were unable to recover it. 00:26:33.060 [2024-11-20 09:17:09.817848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.060 [2024-11-20 09:17:09.817913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.060 qpair failed and we were unable to recover it. 00:26:33.060 [2024-11-20 09:17:09.818198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.060 [2024-11-20 09:17:09.818264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.060 qpair failed and we were unable to recover it. 
00:26:33.060 [2024-11-20 09:17:09.818532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.060 [2024-11-20 09:17:09.818598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.060 qpair failed and we were unable to recover it. 00:26:33.060 [2024-11-20 09:17:09.818849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.060 [2024-11-20 09:17:09.818914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.060 qpair failed and we were unable to recover it. 00:26:33.060 [2024-11-20 09:17:09.819179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.060 [2024-11-20 09:17:09.819243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.060 qpair failed and we were unable to recover it. 00:26:33.060 [2024-11-20 09:17:09.819520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.060 [2024-11-20 09:17:09.819590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.060 qpair failed and we were unable to recover it. 00:26:33.060 [2024-11-20 09:17:09.819871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.060 [2024-11-20 09:17:09.819935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.060 qpair failed and we were unable to recover it. 
00:26:33.063 [2024-11-20 09:17:09.855925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.063 [2024-11-20 09:17:09.855990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.063 qpair failed and we were unable to recover it. 00:26:33.063 [2024-11-20 09:17:09.856278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.063 [2024-11-20 09:17:09.856361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.063 qpair failed and we were unable to recover it. 00:26:33.063 [2024-11-20 09:17:09.856609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.063 [2024-11-20 09:17:09.856673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.063 qpair failed and we were unable to recover it. 00:26:33.063 [2024-11-20 09:17:09.856924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.063 [2024-11-20 09:17:09.856989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.063 qpair failed and we were unable to recover it. 00:26:33.063 [2024-11-20 09:17:09.857236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.063 [2024-11-20 09:17:09.857300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.063 qpair failed and we were unable to recover it. 
00:26:33.063 [2024-11-20 09:17:09.857588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.063 [2024-11-20 09:17:09.857653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.063 qpair failed and we were unable to recover it. 00:26:33.063 [2024-11-20 09:17:09.857956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.063 [2024-11-20 09:17:09.858020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.063 qpair failed and we were unable to recover it. 00:26:33.063 [2024-11-20 09:17:09.858278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.063 [2024-11-20 09:17:09.858362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.063 qpair failed and we were unable to recover it. 00:26:33.063 [2024-11-20 09:17:09.858622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.063 [2024-11-20 09:17:09.858686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.063 qpair failed and we were unable to recover it. 00:26:33.063 [2024-11-20 09:17:09.858913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.063 [2024-11-20 09:17:09.858978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.063 qpair failed and we were unable to recover it. 
00:26:33.063 [2024-11-20 09:17:09.859216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.063 [2024-11-20 09:17:09.859280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.063 qpair failed and we were unable to recover it. 00:26:33.063 [2024-11-20 09:17:09.859554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.063 [2024-11-20 09:17:09.859619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.063 qpair failed and we were unable to recover it. 00:26:33.063 [2024-11-20 09:17:09.859875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.063 [2024-11-20 09:17:09.859940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.063 qpair failed and we were unable to recover it. 00:26:33.063 [2024-11-20 09:17:09.860228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.063 [2024-11-20 09:17:09.860292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.063 qpair failed and we were unable to recover it. 00:26:33.063 [2024-11-20 09:17:09.860524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.063 [2024-11-20 09:17:09.860588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.063 qpair failed and we were unable to recover it. 
00:26:33.063 [2024-11-20 09:17:09.860826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.063 [2024-11-20 09:17:09.860891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.063 qpair failed and we were unable to recover it. 00:26:33.063 [2024-11-20 09:17:09.861143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.063 [2024-11-20 09:17:09.861207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.063 qpair failed and we were unable to recover it. 00:26:33.063 [2024-11-20 09:17:09.861433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.063 [2024-11-20 09:17:09.861499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.063 qpair failed and we were unable to recover it. 00:26:33.063 [2024-11-20 09:17:09.861753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.063 [2024-11-20 09:17:09.861818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.063 qpair failed and we were unable to recover it. 00:26:33.063 [2024-11-20 09:17:09.862037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.063 [2024-11-20 09:17:09.862100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.063 qpair failed and we were unable to recover it. 
00:26:33.063 [2024-11-20 09:17:09.862362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.063 [2024-11-20 09:17:09.862427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.063 qpair failed and we were unable to recover it. 00:26:33.063 [2024-11-20 09:17:09.862643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.063 [2024-11-20 09:17:09.862707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.063 qpair failed and we were unable to recover it. 00:26:33.063 [2024-11-20 09:17:09.862938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.063 [2024-11-20 09:17:09.863001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.063 qpair failed and we were unable to recover it. 00:26:33.064 [2024-11-20 09:17:09.863293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.064 [2024-11-20 09:17:09.863369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.064 qpair failed and we were unable to recover it. 00:26:33.064 [2024-11-20 09:17:09.863594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.064 [2024-11-20 09:17:09.863659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.064 qpair failed and we were unable to recover it. 
00:26:33.064 [2024-11-20 09:17:09.863906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.064 [2024-11-20 09:17:09.863974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.064 qpair failed and we were unable to recover it. 00:26:33.064 [2024-11-20 09:17:09.864269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.064 [2024-11-20 09:17:09.864347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.064 qpair failed and we were unable to recover it. 00:26:33.064 [2024-11-20 09:17:09.864617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.064 [2024-11-20 09:17:09.864680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.064 qpair failed and we were unable to recover it. 00:26:33.064 [2024-11-20 09:17:09.864963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.064 [2024-11-20 09:17:09.865027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.064 qpair failed and we were unable to recover it. 00:26:33.064 [2024-11-20 09:17:09.865271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.064 [2024-11-20 09:17:09.865347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.064 qpair failed and we were unable to recover it. 
00:26:33.064 [2024-11-20 09:17:09.865615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.064 [2024-11-20 09:17:09.865678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.064 qpair failed and we were unable to recover it. 00:26:33.064 [2024-11-20 09:17:09.865980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.064 [2024-11-20 09:17:09.866043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.064 qpair failed and we were unable to recover it. 00:26:33.064 [2024-11-20 09:17:09.866356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.064 [2024-11-20 09:17:09.866422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.064 qpair failed and we were unable to recover it. 00:26:33.064 [2024-11-20 09:17:09.866634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.064 [2024-11-20 09:17:09.866701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.064 qpair failed and we were unable to recover it. 00:26:33.064 [2024-11-20 09:17:09.866938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.064 [2024-11-20 09:17:09.867002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.064 qpair failed and we were unable to recover it. 
00:26:33.064 [2024-11-20 09:17:09.867247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.064 [2024-11-20 09:17:09.867322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.064 qpair failed and we were unable to recover it. 00:26:33.064 [2024-11-20 09:17:09.867583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.064 [2024-11-20 09:17:09.867658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.064 qpair failed and we were unable to recover it. 00:26:33.064 [2024-11-20 09:17:09.867951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.064 [2024-11-20 09:17:09.868013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.064 qpair failed and we were unable to recover it. 00:26:33.064 [2024-11-20 09:17:09.868296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.064 [2024-11-20 09:17:09.868375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.064 qpair failed and we were unable to recover it. 00:26:33.064 [2024-11-20 09:17:09.868642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.064 [2024-11-20 09:17:09.868709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.064 qpair failed and we were unable to recover it. 
00:26:33.064 [2024-11-20 09:17:09.868916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.064 [2024-11-20 09:17:09.868980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.064 qpair failed and we were unable to recover it. 00:26:33.064 [2024-11-20 09:17:09.869239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.064 [2024-11-20 09:17:09.869314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.064 qpair failed and we were unable to recover it. 00:26:33.064 [2024-11-20 09:17:09.869540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.064 [2024-11-20 09:17:09.869604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.064 qpair failed and we were unable to recover it. 00:26:33.064 [2024-11-20 09:17:09.869817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.064 [2024-11-20 09:17:09.869884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.064 qpair failed and we were unable to recover it. 00:26:33.064 [2024-11-20 09:17:09.870156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.064 [2024-11-20 09:17:09.870219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.064 qpair failed and we were unable to recover it. 
00:26:33.064 [2024-11-20 09:17:09.870500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.064 [2024-11-20 09:17:09.870565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.064 qpair failed and we were unable to recover it. 00:26:33.064 [2024-11-20 09:17:09.870762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.064 [2024-11-20 09:17:09.870826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.064 qpair failed and we were unable to recover it. 00:26:33.064 [2024-11-20 09:17:09.871083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.064 [2024-11-20 09:17:09.871147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.064 qpair failed and we were unable to recover it. 00:26:33.064 [2024-11-20 09:17:09.871374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.064 [2024-11-20 09:17:09.871440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.064 qpair failed and we were unable to recover it. 00:26:33.064 [2024-11-20 09:17:09.871662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.064 [2024-11-20 09:17:09.871726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.064 qpair failed and we were unable to recover it. 
00:26:33.064 [2024-11-20 09:17:09.872033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.064 [2024-11-20 09:17:09.872097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.064 qpair failed and we were unable to recover it. 00:26:33.064 [2024-11-20 09:17:09.872355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.064 [2024-11-20 09:17:09.872422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.064 qpair failed and we were unable to recover it. 00:26:33.064 [2024-11-20 09:17:09.872715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.064 [2024-11-20 09:17:09.872777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.064 qpair failed and we were unable to recover it. 00:26:33.064 [2024-11-20 09:17:09.873031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.064 [2024-11-20 09:17:09.873094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.064 qpair failed and we were unable to recover it. 00:26:33.064 [2024-11-20 09:17:09.873382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.064 [2024-11-20 09:17:09.873447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.064 qpair failed and we were unable to recover it. 
00:26:33.064 [2024-11-20 09:17:09.873705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.064 [2024-11-20 09:17:09.873769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.064 qpair failed and we were unable to recover it. 00:26:33.064 [2024-11-20 09:17:09.874026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.064 [2024-11-20 09:17:09.874089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.064 qpair failed and we were unable to recover it. 00:26:33.064 [2024-11-20 09:17:09.874323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.064 [2024-11-20 09:17:09.874388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.065 qpair failed and we were unable to recover it. 00:26:33.065 [2024-11-20 09:17:09.874637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.065 [2024-11-20 09:17:09.874704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.065 qpair failed and we were unable to recover it. 00:26:33.065 [2024-11-20 09:17:09.874992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.065 [2024-11-20 09:17:09.875057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.065 qpair failed and we were unable to recover it. 
00:26:33.065 [2024-11-20 09:17:09.875267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.065 [2024-11-20 09:17:09.875349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.065 qpair failed and we were unable to recover it. 00:26:33.065 [2024-11-20 09:17:09.875576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.065 [2024-11-20 09:17:09.875642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.065 qpair failed and we were unable to recover it. 00:26:33.065 [2024-11-20 09:17:09.875892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.065 [2024-11-20 09:17:09.875956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.065 qpair failed and we were unable to recover it. 00:26:33.065 [2024-11-20 09:17:09.876179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.065 [2024-11-20 09:17:09.876253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.065 qpair failed and we were unable to recover it. 00:26:33.065 [2024-11-20 09:17:09.876613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.065 [2024-11-20 09:17:09.876679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.065 qpair failed and we were unable to recover it. 
00:26:33.065 [2024-11-20 09:17:09.876973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.065 [2024-11-20 09:17:09.877038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.065 qpair failed and we were unable to recover it. 00:26:33.065 [2024-11-20 09:17:09.877324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.065 [2024-11-20 09:17:09.877389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.065 qpair failed and we were unable to recover it. 00:26:33.065 [2024-11-20 09:17:09.877651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.065 [2024-11-20 09:17:09.877715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.065 qpair failed and we were unable to recover it. 00:26:33.065 [2024-11-20 09:17:09.877975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.065 [2024-11-20 09:17:09.878040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.065 qpair failed and we were unable to recover it. 00:26:33.065 [2024-11-20 09:17:09.878336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.065 [2024-11-20 09:17:09.878401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.065 qpair failed and we were unable to recover it. 
00:26:33.065 [2024-11-20 09:17:09.878650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.065 [2024-11-20 09:17:09.878713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.065 qpair failed and we were unable to recover it. 00:26:33.065 [2024-11-20 09:17:09.878926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.065 [2024-11-20 09:17:09.878991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.065 qpair failed and we were unable to recover it. 00:26:33.065 [2024-11-20 09:17:09.879206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.065 [2024-11-20 09:17:09.879269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.065 qpair failed and we were unable to recover it. 00:26:33.065 [2024-11-20 09:17:09.879564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.065 [2024-11-20 09:17:09.879628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.065 qpair failed and we were unable to recover it. 00:26:33.065 [2024-11-20 09:17:09.879853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.065 [2024-11-20 09:17:09.879916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.065 qpair failed and we were unable to recover it. 
00:26:33.065 [2024-11-20 09:17:09.880193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.065 [2024-11-20 09:17:09.880257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.065 qpair failed and we were unable to recover it.
[log condensed: the same connect() / qpair-connect / "unable to recover" trio for tqpair=0x18fffa0 (addr=10.0.0.2, port=4420, errno = 111) repeats verbatim from 09:17:09.880564 through 09:17:09.917651; every connection attempt was refused]
00:26:33.068 [2024-11-20 09:17:09.917913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.068 [2024-11-20 09:17:09.917978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.068 qpair failed and we were unable to recover it. 00:26:33.068 [2024-11-20 09:17:09.918232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.068 [2024-11-20 09:17:09.918297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.068 qpair failed and we were unable to recover it. 00:26:33.068 [2024-11-20 09:17:09.918585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.068 [2024-11-20 09:17:09.918653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.068 qpair failed and we were unable to recover it. 00:26:33.068 [2024-11-20 09:17:09.918935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.068 [2024-11-20 09:17:09.918999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.068 qpair failed and we were unable to recover it. 00:26:33.068 [2024-11-20 09:17:09.919278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.068 [2024-11-20 09:17:09.919367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.068 qpair failed and we were unable to recover it. 
00:26:33.068 [2024-11-20 09:17:09.919622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.068 [2024-11-20 09:17:09.919685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.068 qpair failed and we were unable to recover it. 00:26:33.068 [2024-11-20 09:17:09.919941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.068 [2024-11-20 09:17:09.920005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.068 qpair failed and we were unable to recover it. 00:26:33.068 [2024-11-20 09:17:09.920237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.068 [2024-11-20 09:17:09.920320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.068 qpair failed and we were unable to recover it. 00:26:33.068 [2024-11-20 09:17:09.920624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.068 [2024-11-20 09:17:09.920688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.068 qpair failed and we were unable to recover it. 00:26:33.068 [2024-11-20 09:17:09.920954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.068 [2024-11-20 09:17:09.921018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.068 qpair failed and we were unable to recover it. 
00:26:33.068 [2024-11-20 09:17:09.921299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.068 [2024-11-20 09:17:09.921384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.068 qpair failed and we were unable to recover it. 00:26:33.068 [2024-11-20 09:17:09.921641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.068 [2024-11-20 09:17:09.921706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.068 qpair failed and we were unable to recover it. 00:26:33.068 [2024-11-20 09:17:09.921904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.068 [2024-11-20 09:17:09.921968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.068 qpair failed and we were unable to recover it. 00:26:33.068 [2024-11-20 09:17:09.922250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.068 [2024-11-20 09:17:09.922342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.068 qpair failed and we were unable to recover it. 00:26:33.068 [2024-11-20 09:17:09.922574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.068 [2024-11-20 09:17:09.922641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.068 qpair failed and we were unable to recover it. 
00:26:33.068 [2024-11-20 09:17:09.922924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.068 [2024-11-20 09:17:09.922989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.068 qpair failed and we were unable to recover it. 00:26:33.068 [2024-11-20 09:17:09.923197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.068 [2024-11-20 09:17:09.923261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.068 qpair failed and we were unable to recover it. 00:26:33.068 [2024-11-20 09:17:09.923484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.068 [2024-11-20 09:17:09.923550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.068 qpair failed and we were unable to recover it. 00:26:33.068 [2024-11-20 09:17:09.923772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.068 [2024-11-20 09:17:09.923837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.068 qpair failed and we were unable to recover it. 00:26:33.068 [2024-11-20 09:17:09.924071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.068 [2024-11-20 09:17:09.924135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.068 qpair failed and we were unable to recover it. 
00:26:33.068 [2024-11-20 09:17:09.924376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.068 [2024-11-20 09:17:09.924441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.068 qpair failed and we were unable to recover it. 00:26:33.069 [2024-11-20 09:17:09.924718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.069 [2024-11-20 09:17:09.924783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.069 qpair failed and we were unable to recover it. 00:26:33.069 [2024-11-20 09:17:09.925023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.069 [2024-11-20 09:17:09.925087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.069 qpair failed and we were unable to recover it. 00:26:33.069 [2024-11-20 09:17:09.925359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.069 [2024-11-20 09:17:09.925425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.069 qpair failed and we were unable to recover it. 00:26:33.069 [2024-11-20 09:17:09.925690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.069 [2024-11-20 09:17:09.925755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.069 qpair failed and we were unable to recover it. 
00:26:33.069 [2024-11-20 09:17:09.925998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.069 [2024-11-20 09:17:09.926062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.069 qpair failed and we were unable to recover it. 00:26:33.069 [2024-11-20 09:17:09.926326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.069 [2024-11-20 09:17:09.926392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.069 qpair failed and we were unable to recover it. 00:26:33.069 [2024-11-20 09:17:09.926689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.069 [2024-11-20 09:17:09.926753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.069 qpair failed and we were unable to recover it. 00:26:33.069 [2024-11-20 09:17:09.927052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.069 [2024-11-20 09:17:09.927117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.069 qpair failed and we were unable to recover it. 00:26:33.069 [2024-11-20 09:17:09.927348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.069 [2024-11-20 09:17:09.927412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.069 qpair failed and we were unable to recover it. 
00:26:33.069 [2024-11-20 09:17:09.927670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.069 [2024-11-20 09:17:09.927734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.069 qpair failed and we were unable to recover it. 00:26:33.069 [2024-11-20 09:17:09.928014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.069 [2024-11-20 09:17:09.928079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.069 qpair failed and we were unable to recover it. 00:26:33.069 [2024-11-20 09:17:09.928335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.069 [2024-11-20 09:17:09.928401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.069 qpair failed and we were unable to recover it. 00:26:33.069 [2024-11-20 09:17:09.928637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.069 [2024-11-20 09:17:09.928701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.069 qpair failed and we were unable to recover it. 00:26:33.069 [2024-11-20 09:17:09.928965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.069 [2024-11-20 09:17:09.929031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.069 qpair failed and we were unable to recover it. 
00:26:33.069 [2024-11-20 09:17:09.929280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.069 [2024-11-20 09:17:09.929365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.069 qpair failed and we were unable to recover it. 00:26:33.069 [2024-11-20 09:17:09.929649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.069 [2024-11-20 09:17:09.929714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.069 qpair failed and we were unable to recover it. 00:26:33.069 [2024-11-20 09:17:09.929979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.069 [2024-11-20 09:17:09.930044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.069 qpair failed and we were unable to recover it. 00:26:33.069 [2024-11-20 09:17:09.930337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.069 [2024-11-20 09:17:09.930402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.069 qpair failed and we were unable to recover it. 00:26:33.069 [2024-11-20 09:17:09.930602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.069 [2024-11-20 09:17:09.930666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.069 qpair failed and we were unable to recover it. 
00:26:33.069 [2024-11-20 09:17:09.930924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.069 [2024-11-20 09:17:09.930989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.069 qpair failed and we were unable to recover it. 00:26:33.069 [2024-11-20 09:17:09.931203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.069 [2024-11-20 09:17:09.931266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.069 qpair failed and we were unable to recover it. 00:26:33.069 [2024-11-20 09:17:09.931562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.069 [2024-11-20 09:17:09.931628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.069 qpair failed and we were unable to recover it. 00:26:33.069 [2024-11-20 09:17:09.931933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.069 [2024-11-20 09:17:09.931996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.069 qpair failed and we were unable to recover it. 00:26:33.069 [2024-11-20 09:17:09.932276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.069 [2024-11-20 09:17:09.932365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.069 qpair failed and we were unable to recover it. 
00:26:33.069 [2024-11-20 09:17:09.932602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.069 [2024-11-20 09:17:09.932668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.069 qpair failed and we were unable to recover it. 00:26:33.069 [2024-11-20 09:17:09.932873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.069 [2024-11-20 09:17:09.932936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.069 qpair failed and we were unable to recover it. 00:26:33.069 [2024-11-20 09:17:09.933169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.069 [2024-11-20 09:17:09.933232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.069 qpair failed and we were unable to recover it. 00:26:33.069 [2024-11-20 09:17:09.933522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.069 [2024-11-20 09:17:09.933588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.069 qpair failed and we were unable to recover it. 00:26:33.069 [2024-11-20 09:17:09.933774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.069 [2024-11-20 09:17:09.933838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.069 qpair failed and we were unable to recover it. 
00:26:33.069 [2024-11-20 09:17:09.934107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.069 [2024-11-20 09:17:09.934171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.069 qpair failed and we were unable to recover it. 00:26:33.069 [2024-11-20 09:17:09.934364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.069 [2024-11-20 09:17:09.934431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.069 qpair failed and we were unable to recover it. 00:26:33.069 [2024-11-20 09:17:09.934689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.069 [2024-11-20 09:17:09.934753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.069 qpair failed and we were unable to recover it. 00:26:33.069 [2024-11-20 09:17:09.934965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.069 [2024-11-20 09:17:09.935029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.069 qpair failed and we were unable to recover it. 00:26:33.069 [2024-11-20 09:17:09.935328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.069 [2024-11-20 09:17:09.935396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.069 qpair failed and we were unable to recover it. 
00:26:33.069 [2024-11-20 09:17:09.935680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.070 [2024-11-20 09:17:09.935743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.070 qpair failed and we were unable to recover it. 00:26:33.070 [2024-11-20 09:17:09.935991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.070 [2024-11-20 09:17:09.936054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.070 qpair failed and we were unable to recover it. 00:26:33.070 [2024-11-20 09:17:09.936246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.070 [2024-11-20 09:17:09.936322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.070 qpair failed and we were unable to recover it. 00:26:33.070 [2024-11-20 09:17:09.936582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.070 [2024-11-20 09:17:09.936647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.070 qpair failed and we were unable to recover it. 00:26:33.070 [2024-11-20 09:17:09.936903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.070 [2024-11-20 09:17:09.936967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.070 qpair failed and we were unable to recover it. 
00:26:33.070 [2024-11-20 09:17:09.937200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.070 [2024-11-20 09:17:09.937264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.070 qpair failed and we were unable to recover it. 00:26:33.070 [2024-11-20 09:17:09.937523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.070 [2024-11-20 09:17:09.937590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.070 qpair failed and we were unable to recover it. 00:26:33.070 [2024-11-20 09:17:09.937835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.070 [2024-11-20 09:17:09.937899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.070 qpair failed and we were unable to recover it. 00:26:33.070 [2024-11-20 09:17:09.938098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.070 [2024-11-20 09:17:09.938172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.070 qpair failed and we were unable to recover it. 00:26:33.070 [2024-11-20 09:17:09.938426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.070 [2024-11-20 09:17:09.938494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.070 qpair failed and we were unable to recover it. 
00:26:33.070 [2024-11-20 09:17:09.938747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.070 [2024-11-20 09:17:09.938812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.070 qpair failed and we were unable to recover it. 00:26:33.070 [2024-11-20 09:17:09.939059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.070 [2024-11-20 09:17:09.939123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.070 qpair failed and we were unable to recover it. 00:26:33.070 [2024-11-20 09:17:09.939357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.070 [2024-11-20 09:17:09.939424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.070 qpair failed and we were unable to recover it. 00:26:33.070 [2024-11-20 09:17:09.939727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.070 [2024-11-20 09:17:09.939791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.070 qpair failed and we were unable to recover it. 00:26:33.070 [2024-11-20 09:17:09.940080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.070 [2024-11-20 09:17:09.940144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.070 qpair failed and we were unable to recover it. 
00:26:33.070 [2024-11-20 09:17:09.940348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.070 [2024-11-20 09:17:09.940413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.070 qpair failed and we were unable to recover it. 00:26:33.070 [2024-11-20 09:17:09.940717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.070 [2024-11-20 09:17:09.940781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.070 qpair failed and we were unable to recover it. 00:26:33.070 [2024-11-20 09:17:09.941064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.070 [2024-11-20 09:17:09.941128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.070 qpair failed and we were unable to recover it. 00:26:33.070 [2024-11-20 09:17:09.941368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.070 [2024-11-20 09:17:09.941434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.070 qpair failed and we were unable to recover it. 00:26:33.070 [2024-11-20 09:17:09.941672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.070 [2024-11-20 09:17:09.941736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.070 qpair failed and we were unable to recover it. 
00:26:33.070 [2024-11-20 09:17:09.941982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.070 [2024-11-20 09:17:09.942049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.070 qpair failed and we were unable to recover it. 00:26:33.070 [2024-11-20 09:17:09.942321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.070 [2024-11-20 09:17:09.942387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.070 qpair failed and we were unable to recover it. 00:26:33.070 [2024-11-20 09:17:09.942651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.070 [2024-11-20 09:17:09.942715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.070 qpair failed and we were unable to recover it. 00:26:33.070 [2024-11-20 09:17:09.942920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.070 [2024-11-20 09:17:09.942988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.070 qpair failed and we were unable to recover it. 00:26:33.070 [2024-11-20 09:17:09.943225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.070 [2024-11-20 09:17:09.943289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.070 qpair failed and we were unable to recover it. 
00:26:33.070 [... identical connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock sock connection error / "qpair failed and we were unable to recover it." sequence for tqpair=0x18fffa0 (addr=10.0.0.2, port=4420) repeats continuously from [2024-11-20 09:17:09.943549] through [2024-11-20 09:17:09.979154] ...]
00:26:33.073 [2024-11-20 09:17:09.979399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.073 [2024-11-20 09:17:09.979465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.073 qpair failed and we were unable to recover it. 00:26:33.073 [2024-11-20 09:17:09.979716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.073 [2024-11-20 09:17:09.979781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.073 qpair failed and we were unable to recover it. 00:26:33.073 [2024-11-20 09:17:09.980040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.073 [2024-11-20 09:17:09.980104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.073 qpair failed and we were unable to recover it. 00:26:33.073 [2024-11-20 09:17:09.980354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.073 [2024-11-20 09:17:09.980420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.073 qpair failed and we were unable to recover it. 00:26:33.073 [2024-11-20 09:17:09.980676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.073 [2024-11-20 09:17:09.980741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.073 qpair failed and we were unable to recover it. 
00:26:33.073 [2024-11-20 09:17:09.981042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.073 [2024-11-20 09:17:09.981106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.073 qpair failed and we were unable to recover it. 00:26:33.073 [2024-11-20 09:17:09.981366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.073 [2024-11-20 09:17:09.981432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.073 qpair failed and we were unable to recover it. 00:26:33.073 [2024-11-20 09:17:09.981683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.073 [2024-11-20 09:17:09.981746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.073 qpair failed and we were unable to recover it. 00:26:33.073 [2024-11-20 09:17:09.982045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.073 [2024-11-20 09:17:09.982109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.073 qpair failed and we were unable to recover it. 00:26:33.073 [2024-11-20 09:17:09.982412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.073 [2024-11-20 09:17:09.982478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.073 qpair failed and we were unable to recover it. 
00:26:33.073 [2024-11-20 09:17:09.982700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.073 [2024-11-20 09:17:09.982764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.073 qpair failed and we were unable to recover it. 00:26:33.073 [2024-11-20 09:17:09.983052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.073 [2024-11-20 09:17:09.983116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.073 qpair failed and we were unable to recover it. 00:26:33.073 [2024-11-20 09:17:09.983410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.073 [2024-11-20 09:17:09.983476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.073 qpair failed and we were unable to recover it. 00:26:33.073 [2024-11-20 09:17:09.983761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.073 [2024-11-20 09:17:09.983824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.073 qpair failed and we were unable to recover it. 00:26:33.073 [2024-11-20 09:17:09.984108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.073 [2024-11-20 09:17:09.984172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.073 qpair failed and we were unable to recover it. 
00:26:33.073 [2024-11-20 09:17:09.984421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.073 [2024-11-20 09:17:09.984487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.073 qpair failed and we were unable to recover it. 00:26:33.073 [2024-11-20 09:17:09.984740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.073 [2024-11-20 09:17:09.984804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.073 qpair failed and we were unable to recover it. 00:26:33.074 [2024-11-20 09:17:09.985100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.074 [2024-11-20 09:17:09.985164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.074 qpair failed and we were unable to recover it. 00:26:33.074 [2024-11-20 09:17:09.985466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.074 [2024-11-20 09:17:09.985531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.074 qpair failed and we were unable to recover it. 00:26:33.074 [2024-11-20 09:17:09.985777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.074 [2024-11-20 09:17:09.985844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.074 qpair failed and we were unable to recover it. 
00:26:33.074 [2024-11-20 09:17:09.986151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.074 [2024-11-20 09:17:09.986215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.074 qpair failed and we were unable to recover it. 00:26:33.074 [2024-11-20 09:17:09.986529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.074 [2024-11-20 09:17:09.986595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.074 qpair failed and we were unable to recover it. 00:26:33.074 [2024-11-20 09:17:09.986847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.074 [2024-11-20 09:17:09.986922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.074 qpair failed and we were unable to recover it. 00:26:33.074 [2024-11-20 09:17:09.987167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.074 [2024-11-20 09:17:09.987234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.074 qpair failed and we were unable to recover it. 00:26:33.074 [2024-11-20 09:17:09.987554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.074 [2024-11-20 09:17:09.987619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.074 qpair failed and we were unable to recover it. 
00:26:33.074 [2024-11-20 09:17:09.987839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.074 [2024-11-20 09:17:09.987903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.074 qpair failed and we were unable to recover it. 00:26:33.074 [2024-11-20 09:17:09.988153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.074 [2024-11-20 09:17:09.988216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.074 qpair failed and we were unable to recover it. 00:26:33.074 [2024-11-20 09:17:09.988526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.074 [2024-11-20 09:17:09.988591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.074 qpair failed and we were unable to recover it. 00:26:33.074 [2024-11-20 09:17:09.988855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.074 [2024-11-20 09:17:09.988919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.074 qpair failed and we were unable to recover it. 00:26:33.074 [2024-11-20 09:17:09.989213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.074 [2024-11-20 09:17:09.989277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.074 qpair failed and we were unable to recover it. 
00:26:33.074 [2024-11-20 09:17:09.989555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.074 [2024-11-20 09:17:09.989620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.074 qpair failed and we were unable to recover it. 00:26:33.074 [2024-11-20 09:17:09.989843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.074 [2024-11-20 09:17:09.989907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.074 qpair failed and we were unable to recover it. 00:26:33.074 [2024-11-20 09:17:09.990150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.074 [2024-11-20 09:17:09.990213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.074 qpair failed and we were unable to recover it. 00:26:33.074 [2024-11-20 09:17:09.990471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.074 [2024-11-20 09:17:09.990537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.074 qpair failed and we were unable to recover it. 00:26:33.074 [2024-11-20 09:17:09.990779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.074 [2024-11-20 09:17:09.990845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.074 qpair failed and we were unable to recover it. 
00:26:33.074 [2024-11-20 09:17:09.991106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.074 [2024-11-20 09:17:09.991170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.074 qpair failed and we were unable to recover it. 00:26:33.074 [2024-11-20 09:17:09.991443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.074 [2024-11-20 09:17:09.991511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.074 qpair failed and we were unable to recover it. 00:26:33.074 [2024-11-20 09:17:09.991757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.074 [2024-11-20 09:17:09.991822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.074 qpair failed and we were unable to recover it. 00:26:33.074 [2024-11-20 09:17:09.992105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.074 [2024-11-20 09:17:09.992168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.074 qpair failed and we were unable to recover it. 00:26:33.074 [2024-11-20 09:17:09.992419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.074 [2024-11-20 09:17:09.992487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.074 qpair failed and we were unable to recover it. 
00:26:33.074 [2024-11-20 09:17:09.992753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.074 [2024-11-20 09:17:09.992819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.074 qpair failed and we were unable to recover it. 00:26:33.074 [2024-11-20 09:17:09.993062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.074 [2024-11-20 09:17:09.993127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.074 qpair failed and we were unable to recover it. 00:26:33.074 [2024-11-20 09:17:09.993381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.074 [2024-11-20 09:17:09.993447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.074 qpair failed and we were unable to recover it. 00:26:33.074 [2024-11-20 09:17:09.993726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.074 [2024-11-20 09:17:09.993791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.074 qpair failed and we were unable to recover it. 00:26:33.074 [2024-11-20 09:17:09.994083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.074 [2024-11-20 09:17:09.994147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.074 qpair failed and we were unable to recover it. 
00:26:33.074 [2024-11-20 09:17:09.994434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.074 [2024-11-20 09:17:09.994500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.074 qpair failed and we were unable to recover it. 00:26:33.074 [2024-11-20 09:17:09.994720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.074 [2024-11-20 09:17:09.994785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.074 qpair failed and we were unable to recover it. 00:26:33.074 [2024-11-20 09:17:09.995011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.074 [2024-11-20 09:17:09.995075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.074 qpair failed and we were unable to recover it. 00:26:33.074 [2024-11-20 09:17:09.995377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.074 [2024-11-20 09:17:09.995442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.074 qpair failed and we were unable to recover it. 00:26:33.074 [2024-11-20 09:17:09.995658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.074 [2024-11-20 09:17:09.995724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.074 qpair failed and we were unable to recover it. 
00:26:33.074 [2024-11-20 09:17:09.995941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.074 [2024-11-20 09:17:09.996007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.074 qpair failed and we were unable to recover it. 00:26:33.074 [2024-11-20 09:17:09.996246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.075 [2024-11-20 09:17:09.996332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.075 qpair failed and we were unable to recover it. 00:26:33.075 [2024-11-20 09:17:09.996563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.075 [2024-11-20 09:17:09.996629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.075 qpair failed and we were unable to recover it. 00:26:33.075 [2024-11-20 09:17:09.996883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.075 [2024-11-20 09:17:09.996947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.075 qpair failed and we were unable to recover it. 00:26:33.075 [2024-11-20 09:17:09.997231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.075 [2024-11-20 09:17:09.997296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.075 qpair failed and we were unable to recover it. 
00:26:33.075 [2024-11-20 09:17:09.997570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.075 [2024-11-20 09:17:09.997636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.075 qpair failed and we were unable to recover it. 00:26:33.075 [2024-11-20 09:17:09.997872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.075 [2024-11-20 09:17:09.997937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.075 qpair failed and we were unable to recover it. 00:26:33.075 [2024-11-20 09:17:09.998187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.075 [2024-11-20 09:17:09.998251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.075 qpair failed and we were unable to recover it. 00:26:33.075 [2024-11-20 09:17:09.998516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.075 [2024-11-20 09:17:09.998581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.075 qpair failed and we were unable to recover it. 00:26:33.075 [2024-11-20 09:17:09.998886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.075 [2024-11-20 09:17:09.998949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.075 qpair failed and we were unable to recover it. 
00:26:33.075 [2024-11-20 09:17:09.999245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.075 [2024-11-20 09:17:09.999327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.075 qpair failed and we were unable to recover it. 00:26:33.075 [2024-11-20 09:17:09.999588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.075 [2024-11-20 09:17:09.999652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.075 qpair failed and we were unable to recover it. 00:26:33.075 [2024-11-20 09:17:09.999928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.075 [2024-11-20 09:17:09.999993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.075 qpair failed and we were unable to recover it. 00:26:33.075 [2024-11-20 09:17:10.000228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.075 [2024-11-20 09:17:10.000296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.075 qpair failed and we were unable to recover it. 00:26:33.075 [2024-11-20 09:17:10.000540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.075 [2024-11-20 09:17:10.000607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.075 qpair failed and we were unable to recover it. 
00:26:33.075 [2024-11-20 09:17:10.000863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.075 [2024-11-20 09:17:10.000929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.075 qpair failed and we were unable to recover it. 00:26:33.075 [2024-11-20 09:17:10.001271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.075 [2024-11-20 09:17:10.001353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.075 qpair failed and we were unable to recover it. 00:26:33.075 [2024-11-20 09:17:10.001523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.075 [2024-11-20 09:17:10.001567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.075 qpair failed and we were unable to recover it. 00:26:33.075 [2024-11-20 09:17:10.001683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.075 [2024-11-20 09:17:10.001715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.075 qpair failed and we were unable to recover it. 00:26:33.075 [2024-11-20 09:17:10.001844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.075 [2024-11-20 09:17:10.001875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.075 qpair failed and we were unable to recover it. 
00:26:33.075 [2024-11-20 09:17:10.002023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.075 [2024-11-20 09:17:10.002055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.075 qpair failed and we were unable to recover it. 00:26:33.075 [2024-11-20 09:17:10.002258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.075 [2024-11-20 09:17:10.002291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.075 qpair failed and we were unable to recover it. 00:26:33.075 [2024-11-20 09:17:10.002446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.075 [2024-11-20 09:17:10.002478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.075 qpair failed and we were unable to recover it. 00:26:33.075 [2024-11-20 09:17:10.002616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.075 [2024-11-20 09:17:10.002648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.075 qpair failed and we were unable to recover it. 00:26:33.075 [2024-11-20 09:17:10.002762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.075 [2024-11-20 09:17:10.002794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.075 qpair failed and we were unable to recover it. 
00:26:33.075 [2024-11-20 09:17:10.002901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.075 [2024-11-20 09:17:10.002932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.075 qpair failed and we were unable to recover it.
[The same three-line record — connect() failed, errno = 111 / sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats continuously from 09:17:10.002901 through 09:17:10.030419; identical repeats elided.]
00:26:33.078 [2024-11-20 09:17:10.030628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.078 [2024-11-20 09:17:10.030707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.078 qpair failed and we were unable to recover it. 00:26:33.078 [2024-11-20 09:17:10.030984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.078 [2024-11-20 09:17:10.031039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.078 qpair failed and we were unable to recover it. 00:26:33.078 [2024-11-20 09:17:10.031279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.078 [2024-11-20 09:17:10.031371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.078 qpair failed and we were unable to recover it. 00:26:33.078 [2024-11-20 09:17:10.031568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.078 [2024-11-20 09:17:10.031613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.078 qpair failed and we were unable to recover it. 00:26:33.078 [2024-11-20 09:17:10.031803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.078 [2024-11-20 09:17:10.031873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.078 qpair failed and we were unable to recover it. 
00:26:33.078 [2024-11-20 09:17:10.032137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.078 [2024-11-20 09:17:10.032195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.078 qpair failed and we were unable to recover it. 00:26:33.078 [2024-11-20 09:17:10.032437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.078 [2024-11-20 09:17:10.032486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.078 qpair failed and we were unable to recover it. 00:26:33.078 [2024-11-20 09:17:10.032723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.078 [2024-11-20 09:17:10.032806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.078 qpair failed and we were unable to recover it. 00:26:33.078 [2024-11-20 09:17:10.033018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.078 [2024-11-20 09:17:10.033078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.078 qpair failed and we were unable to recover it. 00:26:33.078 [2024-11-20 09:17:10.033364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.033414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 
00:26:33.079 [2024-11-20 09:17:10.033600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.033670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 00:26:33.079 [2024-11-20 09:17:10.033997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.034053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 00:26:33.079 [2024-11-20 09:17:10.034283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.034341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 00:26:33.079 [2024-11-20 09:17:10.034561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.034630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 00:26:33.079 [2024-11-20 09:17:10.034898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.034972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 
00:26:33.079 [2024-11-20 09:17:10.035244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.035317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 00:26:33.079 [2024-11-20 09:17:10.035530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.035577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 00:26:33.079 [2024-11-20 09:17:10.035767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.035838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 00:26:33.079 [2024-11-20 09:17:10.036041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.036097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 00:26:33.079 [2024-11-20 09:17:10.036557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.036616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 
00:26:33.079 [2024-11-20 09:17:10.036777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.036813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 00:26:33.079 [2024-11-20 09:17:10.036939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.036975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 00:26:33.079 [2024-11-20 09:17:10.037140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.037211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 00:26:33.079 [2024-11-20 09:17:10.037449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.037497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 00:26:33.079 [2024-11-20 09:17:10.037662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.037709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 
00:26:33.079 [2024-11-20 09:17:10.037893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.037963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 00:26:33.079 [2024-11-20 09:17:10.038203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.038263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 00:26:33.079 [2024-11-20 09:17:10.038492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.038538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 00:26:33.079 [2024-11-20 09:17:10.038818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.038854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 00:26:33.079 [2024-11-20 09:17:10.039000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.039035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 
00:26:33.079 [2024-11-20 09:17:10.039151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.039187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 00:26:33.079 [2024-11-20 09:17:10.039367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.039426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 00:26:33.079 [2024-11-20 09:17:10.039609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.039656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 00:26:33.079 [2024-11-20 09:17:10.039858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.039914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 00:26:33.079 [2024-11-20 09:17:10.040205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.040265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 
00:26:33.079 [2024-11-20 09:17:10.040467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.040513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 00:26:33.079 [2024-11-20 09:17:10.040758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.040818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 00:26:33.079 [2024-11-20 09:17:10.041058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.041124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 00:26:33.079 [2024-11-20 09:17:10.041394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.041440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 00:26:33.079 [2024-11-20 09:17:10.041598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.041632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 
00:26:33.079 [2024-11-20 09:17:10.041782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.041817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 00:26:33.079 [2024-11-20 09:17:10.041964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.042000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 00:26:33.079 [2024-11-20 09:17:10.042135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.042170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 00:26:33.079 [2024-11-20 09:17:10.042385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.042431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 00:26:33.079 [2024-11-20 09:17:10.042613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.042676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 
00:26:33.079 [2024-11-20 09:17:10.042958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.043017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 00:26:33.079 [2024-11-20 09:17:10.043246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.043322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.079 qpair failed and we were unable to recover it. 00:26:33.079 [2024-11-20 09:17:10.043533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.079 [2024-11-20 09:17:10.043579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.080 qpair failed and we were unable to recover it. 00:26:33.080 [2024-11-20 09:17:10.043821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.080 [2024-11-20 09:17:10.043881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.080 qpair failed and we were unable to recover it. 00:26:33.080 [2024-11-20 09:17:10.044153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.080 [2024-11-20 09:17:10.044229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.080 qpair failed and we were unable to recover it. 
00:26:33.080 [2024-11-20 09:17:10.044470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.080 [2024-11-20 09:17:10.044517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.080 qpair failed and we were unable to recover it. 00:26:33.080 [2024-11-20 09:17:10.044688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.080 [2024-11-20 09:17:10.044733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.080 qpair failed and we were unable to recover it. 00:26:33.080 [2024-11-20 09:17:10.044878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.080 [2024-11-20 09:17:10.044925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.080 qpair failed and we were unable to recover it. 00:26:33.080 [2024-11-20 09:17:10.045194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.080 [2024-11-20 09:17:10.045250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.080 qpair failed and we were unable to recover it. 00:26:33.080 [2024-11-20 09:17:10.045490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.080 [2024-11-20 09:17:10.045538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.080 qpair failed and we were unable to recover it. 
00:26:33.080 [2024-11-20 09:17:10.045848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.080 [2024-11-20 09:17:10.045894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.080 qpair failed and we were unable to recover it. 00:26:33.080 [2024-11-20 09:17:10.046070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.080 [2024-11-20 09:17:10.046116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.080 qpair failed and we were unable to recover it. 00:26:33.080 [2024-11-20 09:17:10.046337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.080 [2024-11-20 09:17:10.046384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.080 qpair failed and we were unable to recover it. 00:26:33.080 [2024-11-20 09:17:10.046518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.080 [2024-11-20 09:17:10.046563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.080 qpair failed and we were unable to recover it. 00:26:33.080 [2024-11-20 09:17:10.046859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.080 [2024-11-20 09:17:10.046937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.080 qpair failed and we were unable to recover it. 
00:26:33.080 [2024-11-20 09:17:10.047184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.080 [2024-11-20 09:17:10.047239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.080 qpair failed and we were unable to recover it. 00:26:33.080 [2024-11-20 09:17:10.047451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.080 [2024-11-20 09:17:10.047498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.080 qpair failed and we were unable to recover it. 00:26:33.080 [2024-11-20 09:17:10.047775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.080 [2024-11-20 09:17:10.047811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.080 qpair failed and we were unable to recover it. 00:26:33.080 [2024-11-20 09:17:10.047989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.080 [2024-11-20 09:17:10.048024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.080 qpair failed and we were unable to recover it. 00:26:33.080 [2024-11-20 09:17:10.048148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.080 [2024-11-20 09:17:10.048185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.080 qpair failed and we were unable to recover it. 
00:26:33.080 [2024-11-20 09:17:10.048421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.080 [2024-11-20 09:17:10.048467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.080 qpair failed and we were unable to recover it. 00:26:33.080 [2024-11-20 09:17:10.048621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.080 [2024-11-20 09:17:10.048666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.080 qpair failed and we were unable to recover it. 00:26:33.080 [2024-11-20 09:17:10.048851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.080 [2024-11-20 09:17:10.048924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.080 qpair failed and we were unable to recover it. 00:26:33.080 [2024-11-20 09:17:10.049182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.080 [2024-11-20 09:17:10.049236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.080 qpair failed and we were unable to recover it. 00:26:33.080 [2024-11-20 09:17:10.049474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.080 [2024-11-20 09:17:10.049521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.080 qpair failed and we were unable to recover it. 
00:26:33.080 [2024-11-20 09:17:10.049661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.080 [2024-11-20 09:17:10.049708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.080 qpair failed and we were unable to recover it. 00:26:33.080 [2024-11-20 09:17:10.049942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.080 [2024-11-20 09:17:10.050003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.080 qpair failed and we were unable to recover it. 00:26:33.080 [2024-11-20 09:17:10.050182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.080 [2024-11-20 09:17:10.050244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.080 qpair failed and we were unable to recover it. 00:26:33.080 [2024-11-20 09:17:10.050506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.080 [2024-11-20 09:17:10.050562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.080 qpair failed and we were unable to recover it. 00:26:33.080 [2024-11-20 09:17:10.050814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.080 [2024-11-20 09:17:10.050875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.080 qpair failed and we were unable to recover it. 
00:26:33.080 [2024-11-20 09:17:10.051107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.080 [2024-11-20 09:17:10.051182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.080 qpair failed and we were unable to recover it.
[... the three log records above repeat roughly 88 more times for tqpair=0x18fffa0, timestamps 09:17:10.051 through 09:17:10.078, with only the timestamps changing; repeats elided ...]
00:26:33.364 [2024-11-20 09:17:10.078366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190df30 (9): Bad file descriptor
00:26:33.364 [2024-11-20 09:17:10.078806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.364 [2024-11-20 09:17:10.078906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:33.364 qpair failed and we were unable to recover it.
[... the three log records above repeat roughly 26 more times for tqpair=0x7fe96c000b90, timestamps 09:17:10.079 through 09:17:10.087; repeats elided ...]
00:26:33.365 [2024-11-20 09:17:10.087955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.365 [2024-11-20 09:17:10.088012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.365 qpair failed and we were unable to recover it. 00:26:33.365 [2024-11-20 09:17:10.088238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.365 [2024-11-20 09:17:10.088322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.365 qpair failed and we were unable to recover it. 00:26:33.365 [2024-11-20 09:17:10.088594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.365 [2024-11-20 09:17:10.088678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.365 qpair failed and we were unable to recover it. 00:26:33.365 [2024-11-20 09:17:10.088933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.365 [2024-11-20 09:17:10.088994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.365 qpair failed and we were unable to recover it. 00:26:33.365 [2024-11-20 09:17:10.089269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.365 [2024-11-20 09:17:10.089343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.365 qpair failed and we were unable to recover it. 
00:26:33.365 [2024-11-20 09:17:10.089558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.365 [2024-11-20 09:17:10.089648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.365 qpair failed and we were unable to recover it. 00:26:33.365 [2024-11-20 09:17:10.089924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.365 [2024-11-20 09:17:10.089982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.365 qpair failed and we were unable to recover it. 00:26:33.365 [2024-11-20 09:17:10.090221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.365 [2024-11-20 09:17:10.090290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.365 qpair failed and we were unable to recover it. 00:26:33.365 [2024-11-20 09:17:10.090600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.365 [2024-11-20 09:17:10.090656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.365 qpair failed and we were unable to recover it. 00:26:33.365 [2024-11-20 09:17:10.090924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.365 [2024-11-20 09:17:10.090984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.365 qpair failed and we were unable to recover it. 
00:26:33.365 [2024-11-20 09:17:10.091229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.365 [2024-11-20 09:17:10.091296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.365 qpair failed and we were unable to recover it. 00:26:33.365 [2024-11-20 09:17:10.091554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.365 [2024-11-20 09:17:10.091632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.365 qpair failed and we were unable to recover it. 00:26:33.365 [2024-11-20 09:17:10.091873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.365 [2024-11-20 09:17:10.091934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.365 qpair failed and we were unable to recover it. 00:26:33.365 [2024-11-20 09:17:10.092192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.365 [2024-11-20 09:17:10.092249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.365 qpair failed and we were unable to recover it. 00:26:33.365 [2024-11-20 09:17:10.092490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.365 [2024-11-20 09:17:10.092547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.365 qpair failed and we were unable to recover it. 
00:26:33.365 [2024-11-20 09:17:10.092798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.365 [2024-11-20 09:17:10.092859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.365 qpair failed and we were unable to recover it. 00:26:33.365 [2024-11-20 09:17:10.093177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.365 [2024-11-20 09:17:10.093243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.365 qpair failed and we were unable to recover it. 00:26:33.365 [2024-11-20 09:17:10.093477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.365 [2024-11-20 09:17:10.093539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.365 qpair failed and we were unable to recover it. 00:26:33.365 [2024-11-20 09:17:10.093866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.365 [2024-11-20 09:17:10.093922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.365 qpair failed and we were unable to recover it. 00:26:33.365 [2024-11-20 09:17:10.094173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.365 [2024-11-20 09:17:10.094243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.365 qpair failed and we were unable to recover it. 
00:26:33.365 [2024-11-20 09:17:10.094495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.365 [2024-11-20 09:17:10.094557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.365 qpair failed and we were unable to recover it. 00:26:33.365 [2024-11-20 09:17:10.094831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.365 [2024-11-20 09:17:10.094900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.365 qpair failed and we were unable to recover it. 00:26:33.365 [2024-11-20 09:17:10.095161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.365 [2024-11-20 09:17:10.095228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.365 qpair failed and we were unable to recover it. 00:26:33.366 [2024-11-20 09:17:10.095486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-11-20 09:17:10.095544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.366 qpair failed and we were unable to recover it. 00:26:33.366 [2024-11-20 09:17:10.095790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-11-20 09:17:10.095861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.366 qpair failed and we were unable to recover it. 
00:26:33.366 [2024-11-20 09:17:10.096157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-11-20 09:17:10.096223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.366 qpair failed and we were unable to recover it. 00:26:33.366 [2024-11-20 09:17:10.096495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-11-20 09:17:10.096561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.366 qpair failed and we were unable to recover it. 00:26:33.366 [2024-11-20 09:17:10.096828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-11-20 09:17:10.096894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.366 qpair failed and we were unable to recover it. 00:26:33.366 [2024-11-20 09:17:10.097142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-11-20 09:17:10.097209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.366 qpair failed and we were unable to recover it. 00:26:33.366 [2024-11-20 09:17:10.097531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-11-20 09:17:10.097598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.366 qpair failed and we were unable to recover it. 
00:26:33.366 [2024-11-20 09:17:10.097850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-11-20 09:17:10.097916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.366 qpair failed and we were unable to recover it. 00:26:33.366 [2024-11-20 09:17:10.098179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-11-20 09:17:10.098246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.366 qpair failed and we were unable to recover it. 00:26:33.366 [2024-11-20 09:17:10.098546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-11-20 09:17:10.098613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.366 qpair failed and we were unable to recover it. 00:26:33.366 [2024-11-20 09:17:10.098847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-11-20 09:17:10.098913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.366 qpair failed and we were unable to recover it. 00:26:33.366 [2024-11-20 09:17:10.099125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-11-20 09:17:10.099203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.366 qpair failed and we were unable to recover it. 
00:26:33.366 [2024-11-20 09:17:10.099531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-11-20 09:17:10.099600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.366 qpair failed and we were unable to recover it. 00:26:33.366 [2024-11-20 09:17:10.099889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-11-20 09:17:10.099955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.366 qpair failed and we were unable to recover it. 00:26:33.366 [2024-11-20 09:17:10.100163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-11-20 09:17:10.100239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.366 qpair failed and we were unable to recover it. 00:26:33.366 [2024-11-20 09:17:10.100538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-11-20 09:17:10.100607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.366 qpair failed and we were unable to recover it. 00:26:33.366 [2024-11-20 09:17:10.100911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-11-20 09:17:10.100976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.366 qpair failed and we were unable to recover it. 
00:26:33.366 [2024-11-20 09:17:10.101205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-11-20 09:17:10.101272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.366 qpair failed and we were unable to recover it. 00:26:33.366 [2024-11-20 09:17:10.101566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-11-20 09:17:10.101635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.366 qpair failed and we were unable to recover it. 00:26:33.366 [2024-11-20 09:17:10.101925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-11-20 09:17:10.101991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.366 qpair failed and we were unable to recover it. 00:26:33.366 [2024-11-20 09:17:10.102200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-11-20 09:17:10.102266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.366 qpair failed and we were unable to recover it. 00:26:33.366 [2024-11-20 09:17:10.102497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-11-20 09:17:10.102564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.366 qpair failed and we were unable to recover it. 
00:26:33.366 [2024-11-20 09:17:10.102854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-11-20 09:17:10.102919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.366 qpair failed and we were unable to recover it. 00:26:33.366 [2024-11-20 09:17:10.103156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-11-20 09:17:10.103223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.366 qpair failed and we were unable to recover it. 00:26:33.366 [2024-11-20 09:17:10.103532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-11-20 09:17:10.103600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.366 qpair failed and we were unable to recover it. 00:26:33.366 [2024-11-20 09:17:10.103900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-11-20 09:17:10.103966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.366 qpair failed and we were unable to recover it. 00:26:33.366 [2024-11-20 09:17:10.104268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-11-20 09:17:10.104356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.366 qpair failed and we were unable to recover it. 
00:26:33.366 [2024-11-20 09:17:10.104624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-11-20 09:17:10.104691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.366 qpair failed and we were unable to recover it. 00:26:33.366 [2024-11-20 09:17:10.104952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-11-20 09:17:10.105017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.366 qpair failed and we were unable to recover it. 00:26:33.366 [2024-11-20 09:17:10.105324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-11-20 09:17:10.105391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.366 qpair failed and we were unable to recover it. 00:26:33.366 [2024-11-20 09:17:10.105650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-11-20 09:17:10.105718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.366 qpair failed and we were unable to recover it. 00:26:33.366 [2024-11-20 09:17:10.105973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-11-20 09:17:10.106039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.366 qpair failed and we were unable to recover it. 
00:26:33.366 [2024-11-20 09:17:10.106339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-11-20 09:17:10.106407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.366 qpair failed and we were unable to recover it. 00:26:33.366 [2024-11-20 09:17:10.106676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-11-20 09:17:10.106744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.366 qpair failed and we were unable to recover it. 00:26:33.367 [2024-11-20 09:17:10.106990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.367 [2024-11-20 09:17:10.107056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.367 qpair failed and we were unable to recover it. 00:26:33.367 [2024-11-20 09:17:10.107288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.367 [2024-11-20 09:17:10.107374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.367 qpair failed and we were unable to recover it. 00:26:33.367 [2024-11-20 09:17:10.107637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.367 [2024-11-20 09:17:10.107703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.367 qpair failed and we were unable to recover it. 
00:26:33.367 [2024-11-20 09:17:10.107951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.367 [2024-11-20 09:17:10.108016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.367 qpair failed and we were unable to recover it. 00:26:33.367 [2024-11-20 09:17:10.108249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.367 [2024-11-20 09:17:10.108335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.367 qpair failed and we were unable to recover it. 00:26:33.367 [2024-11-20 09:17:10.108617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.367 [2024-11-20 09:17:10.108674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.367 qpair failed and we were unable to recover it. 00:26:33.367 [2024-11-20 09:17:10.108915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.367 [2024-11-20 09:17:10.108982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.367 qpair failed and we were unable to recover it. 00:26:33.367 [2024-11-20 09:17:10.109287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.367 [2024-11-20 09:17:10.109368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.367 qpair failed and we were unable to recover it. 
00:26:33.367 [2024-11-20 09:17:10.109640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.367 [2024-11-20 09:17:10.109706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.367 qpair failed and we were unable to recover it. 00:26:33.367 [2024-11-20 09:17:10.109972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.367 [2024-11-20 09:17:10.110031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.367 qpair failed and we were unable to recover it. 00:26:33.367 [2024-11-20 09:17:10.110282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.367 [2024-11-20 09:17:10.110381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.367 qpair failed and we were unable to recover it. 00:26:33.367 [2024-11-20 09:17:10.110610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.367 [2024-11-20 09:17:10.110676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.367 qpair failed and we were unable to recover it. 00:26:33.367 [2024-11-20 09:17:10.110946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.367 [2024-11-20 09:17:10.111012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.367 qpair failed and we were unable to recover it. 
00:26:33.367 [2024-11-20 09:17:10.111283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.367 [2024-11-20 09:17:10.111384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.367 qpair failed and we were unable to recover it. 00:26:33.367 [2024-11-20 09:17:10.111617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.367 [2024-11-20 09:17:10.111684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.367 qpair failed and we were unable to recover it. 00:26:33.367 [2024-11-20 09:17:10.111897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.367 [2024-11-20 09:17:10.111965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.367 qpair failed and we were unable to recover it. 00:26:33.367 [2024-11-20 09:17:10.112219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.367 [2024-11-20 09:17:10.112284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.367 qpair failed and we were unable to recover it. 00:26:33.367 [2024-11-20 09:17:10.112567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.367 [2024-11-20 09:17:10.112645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.367 qpair failed and we were unable to recover it. 
00:26:33.367 [2024-11-20 09:17:10.112895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.367 [2024-11-20 09:17:10.112961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.367 qpair failed and we were unable to recover it. 00:26:33.367 [2024-11-20 09:17:10.113245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.367 [2024-11-20 09:17:10.113329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.367 qpair failed and we were unable to recover it. 00:26:33.367 [2024-11-20 09:17:10.113597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.367 [2024-11-20 09:17:10.113664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.367 qpair failed and we were unable to recover it. 00:26:33.367 [2024-11-20 09:17:10.113886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.367 [2024-11-20 09:17:10.113951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.367 qpair failed and we were unable to recover it. 00:26:33.367 [2024-11-20 09:17:10.114220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.367 [2024-11-20 09:17:10.114288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.367 qpair failed and we were unable to recover it. 
00:26:33.371 [2024-11-20 09:17:10.148450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.371 [2024-11-20 09:17:10.148518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.371 qpair failed and we were unable to recover it. 00:26:33.371 [2024-11-20 09:17:10.148768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.371 [2024-11-20 09:17:10.148833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.371 qpair failed and we were unable to recover it. 00:26:33.371 [2024-11-20 09:17:10.149080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.371 [2024-11-20 09:17:10.149146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.371 qpair failed and we were unable to recover it. 00:26:33.371 [2024-11-20 09:17:10.149362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.371 [2024-11-20 09:17:10.149432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.371 qpair failed and we were unable to recover it. 00:26:33.371 [2024-11-20 09:17:10.149736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.371 [2024-11-20 09:17:10.149814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.371 qpair failed and we were unable to recover it. 
00:26:33.371 [2024-11-20 09:17:10.150074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.371 [2024-11-20 09:17:10.150141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.371 qpair failed and we were unable to recover it. 00:26:33.371 [2024-11-20 09:17:10.150437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.371 [2024-11-20 09:17:10.150473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.371 qpair failed and we were unable to recover it. 00:26:33.371 [2024-11-20 09:17:10.150591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.371 [2024-11-20 09:17:10.150626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.371 qpair failed and we were unable to recover it. 00:26:33.371 [2024-11-20 09:17:10.150809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.371 [2024-11-20 09:17:10.150874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.371 qpair failed and we were unable to recover it. 00:26:33.371 [2024-11-20 09:17:10.151076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.371 [2024-11-20 09:17:10.151142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.371 qpair failed and we were unable to recover it. 
00:26:33.371 [2024-11-20 09:17:10.151388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.371 [2024-11-20 09:17:10.151455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.371 qpair failed and we were unable to recover it. 00:26:33.371 [2024-11-20 09:17:10.151712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.371 [2024-11-20 09:17:10.151780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.371 qpair failed and we were unable to recover it. 00:26:33.371 [2024-11-20 09:17:10.152076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.371 [2024-11-20 09:17:10.152141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.371 qpair failed and we were unable to recover it. 00:26:33.371 [2024-11-20 09:17:10.152393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.371 [2024-11-20 09:17:10.152460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.371 qpair failed and we were unable to recover it. 00:26:33.371 [2024-11-20 09:17:10.152739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.371 [2024-11-20 09:17:10.152806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.371 qpair failed and we were unable to recover it. 
00:26:33.371 [2024-11-20 09:17:10.153057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.371 [2024-11-20 09:17:10.153122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.371 qpair failed and we were unable to recover it. 00:26:33.371 [2024-11-20 09:17:10.153385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.371 [2024-11-20 09:17:10.153421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.371 qpair failed and we were unable to recover it. 00:26:33.371 [2024-11-20 09:17:10.153573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.371 [2024-11-20 09:17:10.153607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.371 qpair failed and we were unable to recover it. 00:26:33.371 [2024-11-20 09:17:10.153859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.371 [2024-11-20 09:17:10.153925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.371 qpair failed and we were unable to recover it. 00:26:33.371 [2024-11-20 09:17:10.154158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.371 [2024-11-20 09:17:10.154224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.371 qpair failed and we were unable to recover it. 
00:26:33.371 [2024-11-20 09:17:10.154473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.371 [2024-11-20 09:17:10.154540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.371 qpair failed and we were unable to recover it. 00:26:33.371 [2024-11-20 09:17:10.154769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.371 [2024-11-20 09:17:10.154803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.371 qpair failed and we were unable to recover it. 00:26:33.371 [2024-11-20 09:17:10.154949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.371 [2024-11-20 09:17:10.154984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.371 qpair failed and we were unable to recover it. 00:26:33.371 [2024-11-20 09:17:10.155105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.371 [2024-11-20 09:17:10.155139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.371 qpair failed and we were unable to recover it. 00:26:33.371 [2024-11-20 09:17:10.155439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.371 [2024-11-20 09:17:10.155507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.371 qpair failed and we were unable to recover it. 
00:26:33.371 [2024-11-20 09:17:10.155762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.371 [2024-11-20 09:17:10.155828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.371 qpair failed and we were unable to recover it. 00:26:33.371 [2024-11-20 09:17:10.156081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.371 [2024-11-20 09:17:10.156147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.371 qpair failed and we were unable to recover it. 00:26:33.371 [2024-11-20 09:17:10.156406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.371 [2024-11-20 09:17:10.156473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.371 qpair failed and we were unable to recover it. 00:26:33.371 [2024-11-20 09:17:10.156733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.371 [2024-11-20 09:17:10.156799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.371 qpair failed and we were unable to recover it. 00:26:33.371 [2024-11-20 09:17:10.157093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.371 [2024-11-20 09:17:10.157159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.371 qpair failed and we were unable to recover it. 
00:26:33.371 [2024-11-20 09:17:10.157432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.371 [2024-11-20 09:17:10.157499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.371 qpair failed and we were unable to recover it. 00:26:33.372 [2024-11-20 09:17:10.157712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.372 [2024-11-20 09:17:10.157779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.372 qpair failed and we were unable to recover it. 00:26:33.372 [2024-11-20 09:17:10.158050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.372 [2024-11-20 09:17:10.158117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.372 qpair failed and we were unable to recover it. 00:26:33.372 [2024-11-20 09:17:10.158379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.372 [2024-11-20 09:17:10.158446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.372 qpair failed and we were unable to recover it. 00:26:33.372 [2024-11-20 09:17:10.158771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.372 [2024-11-20 09:17:10.158837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.372 qpair failed and we were unable to recover it. 
00:26:33.372 [2024-11-20 09:17:10.159139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.372 [2024-11-20 09:17:10.159205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.372 qpair failed and we were unable to recover it. 00:26:33.372 [2024-11-20 09:17:10.159490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.372 [2024-11-20 09:17:10.159558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.372 qpair failed and we were unable to recover it. 00:26:33.372 [2024-11-20 09:17:10.159773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.372 [2024-11-20 09:17:10.159841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.372 qpair failed and we were unable to recover it. 00:26:33.372 [2024-11-20 09:17:10.160099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.372 [2024-11-20 09:17:10.160164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.372 qpair failed and we were unable to recover it. 00:26:33.372 [2024-11-20 09:17:10.160380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.372 [2024-11-20 09:17:10.160448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.372 qpair failed and we were unable to recover it. 
00:26:33.372 [2024-11-20 09:17:10.160700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.372 [2024-11-20 09:17:10.160766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.372 qpair failed and we were unable to recover it. 00:26:33.372 [2024-11-20 09:17:10.161056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.372 [2024-11-20 09:17:10.161121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.372 qpair failed and we were unable to recover it. 00:26:33.372 [2024-11-20 09:17:10.161374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.372 [2024-11-20 09:17:10.161442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.372 qpair failed and we were unable to recover it. 00:26:33.372 [2024-11-20 09:17:10.161654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.372 [2024-11-20 09:17:10.161724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.372 qpair failed and we were unable to recover it. 00:26:33.372 [2024-11-20 09:17:10.161992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.372 [2024-11-20 09:17:10.162069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.372 qpair failed and we were unable to recover it. 
00:26:33.372 [2024-11-20 09:17:10.162372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.372 [2024-11-20 09:17:10.162438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.372 qpair failed and we were unable to recover it. 00:26:33.372 [2024-11-20 09:17:10.162721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.372 [2024-11-20 09:17:10.162787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.372 qpair failed and we were unable to recover it. 00:26:33.372 [2024-11-20 09:17:10.163050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.372 [2024-11-20 09:17:10.163118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.372 qpair failed and we were unable to recover it. 00:26:33.372 [2024-11-20 09:17:10.163378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.372 [2024-11-20 09:17:10.163445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.372 qpair failed and we were unable to recover it. 00:26:33.372 [2024-11-20 09:17:10.163708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.372 [2024-11-20 09:17:10.163773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.372 qpair failed and we were unable to recover it. 
00:26:33.372 [2024-11-20 09:17:10.164014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.372 [2024-11-20 09:17:10.164082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.372 qpair failed and we were unable to recover it. 00:26:33.372 [2024-11-20 09:17:10.164338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.372 [2024-11-20 09:17:10.164406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.372 qpair failed and we were unable to recover it. 00:26:33.372 [2024-11-20 09:17:10.164674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.372 [2024-11-20 09:17:10.164739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.372 qpair failed and we were unable to recover it. 00:26:33.372 [2024-11-20 09:17:10.165009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.372 [2024-11-20 09:17:10.165075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.372 qpair failed and we were unable to recover it. 00:26:33.372 [2024-11-20 09:17:10.165288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.372 [2024-11-20 09:17:10.165379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.372 qpair failed and we were unable to recover it. 
00:26:33.372 [2024-11-20 09:17:10.165675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.372 [2024-11-20 09:17:10.165742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.372 qpair failed and we were unable to recover it. 00:26:33.372 [2024-11-20 09:17:10.166037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.372 [2024-11-20 09:17:10.166072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.372 qpair failed and we were unable to recover it. 00:26:33.372 [2024-11-20 09:17:10.166185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.372 [2024-11-20 09:17:10.166220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.372 qpair failed and we were unable to recover it. 00:26:33.372 [2024-11-20 09:17:10.166397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.372 [2024-11-20 09:17:10.166433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.372 qpair failed and we were unable to recover it. 00:26:33.372 [2024-11-20 09:17:10.166588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.372 [2024-11-20 09:17:10.166650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.372 qpair failed and we were unable to recover it. 
00:26:33.372 [2024-11-20 09:17:10.166934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.372 [2024-11-20 09:17:10.166998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.372 qpair failed and we were unable to recover it. 00:26:33.372 [2024-11-20 09:17:10.167255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.372 [2024-11-20 09:17:10.167337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.372 qpair failed and we were unable to recover it. 00:26:33.373 [2024-11-20 09:17:10.167594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.373 [2024-11-20 09:17:10.167629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.373 qpair failed and we were unable to recover it. 00:26:33.373 [2024-11-20 09:17:10.167771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.373 [2024-11-20 09:17:10.167832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.373 qpair failed and we were unable to recover it. 00:26:33.373 [2024-11-20 09:17:10.168103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.373 [2024-11-20 09:17:10.168169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.373 qpair failed and we were unable to recover it. 
00:26:33.373 [2024-11-20 09:17:10.168430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.373 [2024-11-20 09:17:10.168497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.373 qpair failed and we were unable to recover it. 00:26:33.373 [2024-11-20 09:17:10.168757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.373 [2024-11-20 09:17:10.168823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.373 qpair failed and we were unable to recover it. 00:26:33.373 [2024-11-20 09:17:10.169081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.373 [2024-11-20 09:17:10.169150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.373 qpair failed and we were unable to recover it. 00:26:33.373 [2024-11-20 09:17:10.169420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.373 [2024-11-20 09:17:10.169487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.373 qpair failed and we were unable to recover it. 00:26:33.373 [2024-11-20 09:17:10.169786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.373 [2024-11-20 09:17:10.169853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.373 qpair failed and we were unable to recover it. 
00:26:33.373 [2024-11-20 09:17:10.170066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.373 [2024-11-20 09:17:10.170132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:33.373 qpair failed and we were unable to recover it. 00:26:33.373 [2024-11-20 09:17:10.170464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.373 [2024-11-20 09:17:10.170516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.373 qpair failed and we were unable to recover it. 00:26:33.373 [2024-11-20 09:17:10.170648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.373 [2024-11-20 09:17:10.170687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.373 qpair failed and we were unable to recover it. 00:26:33.373 [2024-11-20 09:17:10.170832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.373 [2024-11-20 09:17:10.170866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.373 qpair failed and we were unable to recover it. 00:26:33.373 [2024-11-20 09:17:10.171017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.373 [2024-11-20 09:17:10.171051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.373 qpair failed and we were unable to recover it. 
00:26:33.373 [2024-11-20 09:17:10.171197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.373 [2024-11-20 09:17:10.171231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.373 qpair failed and we were unable to recover it. 00:26:33.373 [2024-11-20 09:17:10.171351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.373 [2024-11-20 09:17:10.171388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.373 qpair failed and we were unable to recover it. 00:26:33.373 [2024-11-20 09:17:10.171531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.373 [2024-11-20 09:17:10.171565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.373 qpair failed and we were unable to recover it. 00:26:33.373 [2024-11-20 09:17:10.171706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.373 [2024-11-20 09:17:10.171741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.373 qpair failed and we were unable to recover it. 00:26:33.373 [2024-11-20 09:17:10.171846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.373 [2024-11-20 09:17:10.171881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.373 qpair failed and we were unable to recover it. 
00:26:33.376 [2024-11-20 09:17:10.204389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.376 [2024-11-20 09:17:10.204456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.376 qpair failed and we were unable to recover it. 00:26:33.376 [2024-11-20 09:17:10.204708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.376 [2024-11-20 09:17:10.204776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.376 qpair failed and we were unable to recover it. 00:26:33.376 [2024-11-20 09:17:10.205024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.376 [2024-11-20 09:17:10.205091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.376 qpair failed and we were unable to recover it. 00:26:33.376 [2024-11-20 09:17:10.205396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.376 [2024-11-20 09:17:10.205464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.376 qpair failed and we were unable to recover it. 00:26:33.376 [2024-11-20 09:17:10.205707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.376 [2024-11-20 09:17:10.205772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.376 qpair failed and we were unable to recover it. 
00:26:33.376 [2024-11-20 09:17:10.206031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.376 [2024-11-20 09:17:10.206096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.376 qpair failed and we were unable to recover it. 00:26:33.376 [2024-11-20 09:17:10.206339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.376 [2024-11-20 09:17:10.206408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.376 qpair failed and we were unable to recover it. 00:26:33.376 [2024-11-20 09:17:10.206680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.376 [2024-11-20 09:17:10.206745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.376 qpair failed and we were unable to recover it. 00:26:33.376 [2024-11-20 09:17:10.207038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.376 [2024-11-20 09:17:10.207103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.376 qpair failed and we were unable to recover it. 00:26:33.376 [2024-11-20 09:17:10.207332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.376 [2024-11-20 09:17:10.207399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.376 qpair failed and we were unable to recover it. 
00:26:33.376 [2024-11-20 09:17:10.207649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.376 [2024-11-20 09:17:10.207719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.376 qpair failed and we were unable to recover it. 00:26:33.376 [2024-11-20 09:17:10.207923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.376 [2024-11-20 09:17:10.207988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.376 qpair failed and we were unable to recover it. 00:26:33.376 [2024-11-20 09:17:10.208233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.376 [2024-11-20 09:17:10.208298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.376 qpair failed and we were unable to recover it. 00:26:33.376 [2024-11-20 09:17:10.208586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.376 [2024-11-20 09:17:10.208652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.376 qpair failed and we were unable to recover it. 00:26:33.376 [2024-11-20 09:17:10.208896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.376 [2024-11-20 09:17:10.208973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.376 qpair failed and we were unable to recover it. 
00:26:33.376 [2024-11-20 09:17:10.209220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.376 [2024-11-20 09:17:10.209285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.376 qpair failed and we were unable to recover it. 00:26:33.376 [2024-11-20 09:17:10.209587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.376 [2024-11-20 09:17:10.209653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.376 qpair failed and we were unable to recover it. 00:26:33.376 [2024-11-20 09:17:10.209949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.376 [2024-11-20 09:17:10.210014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.376 qpair failed and we were unable to recover it. 00:26:33.376 [2024-11-20 09:17:10.210240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.376 [2024-11-20 09:17:10.210325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.376 qpair failed and we were unable to recover it. 00:26:33.376 [2024-11-20 09:17:10.210568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.376 [2024-11-20 09:17:10.210634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.376 qpair failed and we were unable to recover it. 
00:26:33.376 [2024-11-20 09:17:10.210924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.376 [2024-11-20 09:17:10.210988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.376 qpair failed and we were unable to recover it. 00:26:33.376 [2024-11-20 09:17:10.211228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.376 [2024-11-20 09:17:10.211295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.376 qpair failed and we were unable to recover it. 00:26:33.376 [2024-11-20 09:17:10.211610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.376 [2024-11-20 09:17:10.211675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.376 qpair failed and we were unable to recover it. 00:26:33.377 [2024-11-20 09:17:10.211927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.377 [2024-11-20 09:17:10.211992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.377 qpair failed and we were unable to recover it. 00:26:33.377 [2024-11-20 09:17:10.212282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.377 [2024-11-20 09:17:10.212366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.377 qpair failed and we were unable to recover it. 
00:26:33.377 [2024-11-20 09:17:10.212660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.377 [2024-11-20 09:17:10.212725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.377 qpair failed and we were unable to recover it. 00:26:33.377 [2024-11-20 09:17:10.213033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.377 [2024-11-20 09:17:10.213097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.377 qpair failed and we were unable to recover it. 00:26:33.377 [2024-11-20 09:17:10.213361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.377 [2024-11-20 09:17:10.213428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.377 qpair failed and we were unable to recover it. 00:26:33.377 [2024-11-20 09:17:10.213693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.377 [2024-11-20 09:17:10.213758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.377 qpair failed and we were unable to recover it. 00:26:33.377 [2024-11-20 09:17:10.214016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.377 [2024-11-20 09:17:10.214080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.377 qpair failed and we were unable to recover it. 
00:26:33.377 [2024-11-20 09:17:10.214331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.377 [2024-11-20 09:17:10.214397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.377 qpair failed and we were unable to recover it. 00:26:33.377 [2024-11-20 09:17:10.214688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.377 [2024-11-20 09:17:10.214754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.377 qpair failed and we were unable to recover it. 00:26:33.377 [2024-11-20 09:17:10.215007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.377 [2024-11-20 09:17:10.215073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.377 qpair failed and we were unable to recover it. 00:26:33.377 [2024-11-20 09:17:10.215299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.377 [2024-11-20 09:17:10.215380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.377 qpair failed and we were unable to recover it. 00:26:33.377 [2024-11-20 09:17:10.215675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.377 [2024-11-20 09:17:10.215739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.377 qpair failed and we were unable to recover it. 
00:26:33.377 [2024-11-20 09:17:10.215984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.377 [2024-11-20 09:17:10.216051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.377 qpair failed and we were unable to recover it. 00:26:33.377 [2024-11-20 09:17:10.216299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.377 [2024-11-20 09:17:10.216380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.377 qpair failed and we were unable to recover it. 00:26:33.377 [2024-11-20 09:17:10.216606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.377 [2024-11-20 09:17:10.216671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.377 qpair failed and we were unable to recover it. 00:26:33.377 [2024-11-20 09:17:10.216900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.377 [2024-11-20 09:17:10.216965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.377 qpair failed and we were unable to recover it. 00:26:33.377 [2024-11-20 09:17:10.217245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.377 [2024-11-20 09:17:10.217346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.377 qpair failed and we were unable to recover it. 
00:26:33.377 [2024-11-20 09:17:10.217614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.377 [2024-11-20 09:17:10.217678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.377 qpair failed and we were unable to recover it. 00:26:33.377 [2024-11-20 09:17:10.217896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.377 [2024-11-20 09:17:10.217961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.377 qpair failed and we were unable to recover it. 00:26:33.377 [2024-11-20 09:17:10.218229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.377 [2024-11-20 09:17:10.218294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.377 qpair failed and we were unable to recover it. 00:26:33.377 [2024-11-20 09:17:10.218573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.377 [2024-11-20 09:17:10.218637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.377 qpair failed and we were unable to recover it. 00:26:33.377 [2024-11-20 09:17:10.218888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.377 [2024-11-20 09:17:10.218956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.377 qpair failed and we were unable to recover it. 
00:26:33.377 [2024-11-20 09:17:10.219209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.377 [2024-11-20 09:17:10.219276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.377 qpair failed and we were unable to recover it. 00:26:33.377 [2024-11-20 09:17:10.219591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.377 [2024-11-20 09:17:10.219656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.377 qpair failed and we were unable to recover it. 00:26:33.377 [2024-11-20 09:17:10.219953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.377 [2024-11-20 09:17:10.220019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.377 qpair failed and we were unable to recover it. 00:26:33.377 [2024-11-20 09:17:10.220244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.377 [2024-11-20 09:17:10.220328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.377 qpair failed and we were unable to recover it. 00:26:33.377 [2024-11-20 09:17:10.220599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.377 [2024-11-20 09:17:10.220663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.377 qpair failed and we were unable to recover it. 
00:26:33.377 [2024-11-20 09:17:10.220953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.377 [2024-11-20 09:17:10.221018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.377 qpair failed and we were unable to recover it. 00:26:33.377 [2024-11-20 09:17:10.221369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.377 [2024-11-20 09:17:10.221435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.377 qpair failed and we were unable to recover it. 00:26:33.377 [2024-11-20 09:17:10.221732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.377 [2024-11-20 09:17:10.221797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.377 qpair failed and we were unable to recover it. 00:26:33.377 [2024-11-20 09:17:10.222010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.377 [2024-11-20 09:17:10.222074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.377 qpair failed and we were unable to recover it. 00:26:33.377 [2024-11-20 09:17:10.222369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.377 [2024-11-20 09:17:10.222447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.377 qpair failed and we were unable to recover it. 
00:26:33.377 [2024-11-20 09:17:10.222673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.377 [2024-11-20 09:17:10.222738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.377 qpair failed and we were unable to recover it. 00:26:33.377 [2024-11-20 09:17:10.222952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.377 [2024-11-20 09:17:10.223016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.377 qpair failed and we were unable to recover it. 00:26:33.378 [2024-11-20 09:17:10.223324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.378 [2024-11-20 09:17:10.223390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.378 qpair failed and we were unable to recover it. 00:26:33.378 [2024-11-20 09:17:10.223608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.378 [2024-11-20 09:17:10.223676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.378 qpair failed and we were unable to recover it. 00:26:33.378 [2024-11-20 09:17:10.223940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.378 [2024-11-20 09:17:10.224008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.378 qpair failed and we were unable to recover it. 
00:26:33.378 [2024-11-20 09:17:10.224258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.378 [2024-11-20 09:17:10.224350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.378 qpair failed and we were unable to recover it. 00:26:33.378 [2024-11-20 09:17:10.224608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.378 [2024-11-20 09:17:10.224674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.378 qpair failed and we were unable to recover it. 00:26:33.378 [2024-11-20 09:17:10.224925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.378 [2024-11-20 09:17:10.224993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.378 qpair failed and we were unable to recover it. 00:26:33.378 [2024-11-20 09:17:10.225280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.378 [2024-11-20 09:17:10.225361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.378 qpair failed and we were unable to recover it. 00:26:33.378 [2024-11-20 09:17:10.225612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.378 [2024-11-20 09:17:10.225678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.378 qpair failed and we were unable to recover it. 
00:26:33.378 [2024-11-20 09:17:10.225885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.378 [2024-11-20 09:17:10.225949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.378 qpair failed and we were unable to recover it. 00:26:33.378 [2024-11-20 09:17:10.226202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.378 [2024-11-20 09:17:10.226267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.378 qpair failed and we were unable to recover it. 00:26:33.378 [2024-11-20 09:17:10.226557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.378 [2024-11-20 09:17:10.226623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.378 qpair failed and we were unable to recover it. 00:26:33.378 [2024-11-20 09:17:10.226933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.378 [2024-11-20 09:17:10.227000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.378 qpair failed and we were unable to recover it. 00:26:33.378 [2024-11-20 09:17:10.227251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.378 [2024-11-20 09:17:10.227338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.378 qpair failed and we were unable to recover it. 
00:26:33.378 [2024-11-20 09:17:10.227590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.378 [2024-11-20 09:17:10.227657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.378 qpair failed and we were unable to recover it. 00:26:33.378 [2024-11-20 09:17:10.227947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.378 [2024-11-20 09:17:10.228013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.378 qpair failed and we were unable to recover it. 00:26:33.378 [2024-11-20 09:17:10.228333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.378 [2024-11-20 09:17:10.228401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.378 qpair failed and we were unable to recover it. 00:26:33.378 [2024-11-20 09:17:10.228655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.378 [2024-11-20 09:17:10.228721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.378 qpair failed and we were unable to recover it. 00:26:33.378 [2024-11-20 09:17:10.228916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.378 [2024-11-20 09:17:10.228984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.378 qpair failed and we were unable to recover it. 
00:26:33.378 [2024-11-20 09:17:10.229287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.378 [2024-11-20 09:17:10.229372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.378 qpair failed and we were unable to recover it. 00:26:33.378 [2024-11-20 09:17:10.229610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.378 [2024-11-20 09:17:10.229676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.378 qpair failed and we were unable to recover it. 00:26:33.378 [2024-11-20 09:17:10.229927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.378 [2024-11-20 09:17:10.229991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.378 qpair failed and we were unable to recover it. 00:26:33.378 [2024-11-20 09:17:10.230280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.378 [2024-11-20 09:17:10.230373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.378 qpair failed and we were unable to recover it. 00:26:33.378 [2024-11-20 09:17:10.230632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.378 [2024-11-20 09:17:10.230697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.378 qpair failed and we were unable to recover it. 
00:26:33.381 [2024-11-20 09:17:10.267442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.381 [2024-11-20 09:17:10.267511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.381 qpair failed and we were unable to recover it. 00:26:33.381 [2024-11-20 09:17:10.267773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.381 [2024-11-20 09:17:10.267838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.381 qpair failed and we were unable to recover it. 00:26:33.381 [2024-11-20 09:17:10.268082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.381 [2024-11-20 09:17:10.268148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.381 qpair failed and we were unable to recover it. 00:26:33.381 [2024-11-20 09:17:10.268451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.381 [2024-11-20 09:17:10.268519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.381 qpair failed and we were unable to recover it. 00:26:33.381 [2024-11-20 09:17:10.268820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.381 [2024-11-20 09:17:10.268885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.381 qpair failed and we were unable to recover it. 
00:26:33.381 [2024-11-20 09:17:10.269118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.381 [2024-11-20 09:17:10.269184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.381 qpair failed and we were unable to recover it. 00:26:33.381 [2024-11-20 09:17:10.269450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.381 [2024-11-20 09:17:10.269519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.381 qpair failed and we were unable to recover it. 00:26:33.381 [2024-11-20 09:17:10.269766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.381 [2024-11-20 09:17:10.269833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.381 qpair failed and we were unable to recover it. 00:26:33.381 [2024-11-20 09:17:10.270112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.381 [2024-11-20 09:17:10.270176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.381 qpair failed and we were unable to recover it. 00:26:33.381 [2024-11-20 09:17:10.270417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.381 [2024-11-20 09:17:10.270485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.381 qpair failed and we were unable to recover it. 
00:26:33.381 [2024-11-20 09:17:10.270730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.381 [2024-11-20 09:17:10.270795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.381 qpair failed and we were unable to recover it. 00:26:33.381 [2024-11-20 09:17:10.271062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.381 [2024-11-20 09:17:10.271138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.381 qpair failed and we were unable to recover it. 00:26:33.381 [2024-11-20 09:17:10.271403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.381 [2024-11-20 09:17:10.271470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.381 qpair failed and we were unable to recover it. 00:26:33.381 [2024-11-20 09:17:10.271728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.271793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 00:26:33.382 [2024-11-20 09:17:10.272043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.272108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 
00:26:33.382 [2024-11-20 09:17:10.272398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.272465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 00:26:33.382 [2024-11-20 09:17:10.272754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.272819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 00:26:33.382 [2024-11-20 09:17:10.273024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.273089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 00:26:33.382 [2024-11-20 09:17:10.273338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.273405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 00:26:33.382 [2024-11-20 09:17:10.273621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.273687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 
00:26:33.382 [2024-11-20 09:17:10.273943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.274007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 00:26:33.382 [2024-11-20 09:17:10.274322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.274388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 00:26:33.382 [2024-11-20 09:17:10.274597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.274663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 00:26:33.382 [2024-11-20 09:17:10.274950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.275014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 00:26:33.382 [2024-11-20 09:17:10.275204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.275271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 
00:26:33.382 [2024-11-20 09:17:10.275566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.275633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 00:26:33.382 [2024-11-20 09:17:10.275934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.275999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 00:26:33.382 [2024-11-20 09:17:10.276260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.276344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 00:26:33.382 [2024-11-20 09:17:10.276602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.276668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 00:26:33.382 [2024-11-20 09:17:10.276957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.277021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 
00:26:33.382 [2024-11-20 09:17:10.277214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.277282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 00:26:33.382 [2024-11-20 09:17:10.277570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.277636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 00:26:33.382 [2024-11-20 09:17:10.277935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.278000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 00:26:33.382 [2024-11-20 09:17:10.278264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.278353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 00:26:33.382 [2024-11-20 09:17:10.278580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.278648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 
00:26:33.382 [2024-11-20 09:17:10.278938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.279004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 00:26:33.382 [2024-11-20 09:17:10.279263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.279346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 00:26:33.382 [2024-11-20 09:17:10.279655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.279720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 00:26:33.382 [2024-11-20 09:17:10.280019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.280085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 00:26:33.382 [2024-11-20 09:17:10.280386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.280453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 
00:26:33.382 [2024-11-20 09:17:10.280732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.280799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 00:26:33.382 [2024-11-20 09:17:10.281022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.281088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 00:26:33.382 [2024-11-20 09:17:10.281344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.281410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 00:26:33.382 [2024-11-20 09:17:10.281643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.281709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 00:26:33.382 [2024-11-20 09:17:10.281959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.282025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 
00:26:33.382 [2024-11-20 09:17:10.282260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.282339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 00:26:33.382 [2024-11-20 09:17:10.282595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.282661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 00:26:33.382 [2024-11-20 09:17:10.282961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.283026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 00:26:33.382 [2024-11-20 09:17:10.283331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.283398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 00:26:33.382 [2024-11-20 09:17:10.283706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.382 [2024-11-20 09:17:10.283772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.382 qpair failed and we were unable to recover it. 
00:26:33.382 [2024-11-20 09:17:10.284014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.383 [2024-11-20 09:17:10.284082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.383 qpair failed and we were unable to recover it. 00:26:33.383 [2024-11-20 09:17:10.284339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.383 [2024-11-20 09:17:10.284417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.383 qpair failed and we were unable to recover it. 00:26:33.383 [2024-11-20 09:17:10.284681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.383 [2024-11-20 09:17:10.284746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.383 qpair failed and we were unable to recover it. 00:26:33.383 [2024-11-20 09:17:10.285036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.383 [2024-11-20 09:17:10.285102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.383 qpair failed and we were unable to recover it. 00:26:33.383 [2024-11-20 09:17:10.285367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.383 [2024-11-20 09:17:10.285433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.383 qpair failed and we were unable to recover it. 
00:26:33.383 [2024-11-20 09:17:10.285727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.383 [2024-11-20 09:17:10.285792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.383 qpair failed and we were unable to recover it. 00:26:33.383 [2024-11-20 09:17:10.286093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.383 [2024-11-20 09:17:10.286159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.383 qpair failed and we were unable to recover it. 00:26:33.383 [2024-11-20 09:17:10.286424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.383 [2024-11-20 09:17:10.286490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.383 qpair failed and we were unable to recover it. 00:26:33.383 [2024-11-20 09:17:10.286792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.383 [2024-11-20 09:17:10.286857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.383 qpair failed and we were unable to recover it. 00:26:33.383 [2024-11-20 09:17:10.287127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.383 [2024-11-20 09:17:10.287193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.383 qpair failed and we were unable to recover it. 
00:26:33.383 [2024-11-20 09:17:10.287442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.383 [2024-11-20 09:17:10.287509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.383 qpair failed and we were unable to recover it. 00:26:33.383 [2024-11-20 09:17:10.287755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.383 [2024-11-20 09:17:10.287824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.383 qpair failed and we were unable to recover it. 00:26:33.383 [2024-11-20 09:17:10.288097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.383 [2024-11-20 09:17:10.288163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.383 qpair failed and we were unable to recover it. 00:26:33.383 [2024-11-20 09:17:10.288455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.383 [2024-11-20 09:17:10.288523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.383 qpair failed and we were unable to recover it. 00:26:33.383 [2024-11-20 09:17:10.288814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.383 [2024-11-20 09:17:10.288879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.383 qpair failed and we were unable to recover it. 
00:26:33.383 [2024-11-20 09:17:10.289151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.383 [2024-11-20 09:17:10.289216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.383 qpair failed and we were unable to recover it. 00:26:33.383 [2024-11-20 09:17:10.289534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.383 [2024-11-20 09:17:10.289600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.383 qpair failed and we were unable to recover it. 00:26:33.383 [2024-11-20 09:17:10.289889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.383 [2024-11-20 09:17:10.289956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.383 qpair failed and we were unable to recover it. 00:26:33.383 [2024-11-20 09:17:10.290217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.383 [2024-11-20 09:17:10.290281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.383 qpair failed and we were unable to recover it. 00:26:33.383 [2024-11-20 09:17:10.290507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.383 [2024-11-20 09:17:10.290572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.383 qpair failed and we were unable to recover it. 
00:26:33.383 [2024-11-20 09:17:10.290862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.383 [2024-11-20 09:17:10.290928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.383 qpair failed and we were unable to recover it. 00:26:33.383 [2024-11-20 09:17:10.291220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.383 [2024-11-20 09:17:10.291284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.383 qpair failed and we were unable to recover it. 00:26:33.383 [2024-11-20 09:17:10.291585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.383 [2024-11-20 09:17:10.291651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.383 qpair failed and we were unable to recover it. 00:26:33.383 [2024-11-20 09:17:10.291888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.383 [2024-11-20 09:17:10.291956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.383 qpair failed and we were unable to recover it. 00:26:33.383 [2024-11-20 09:17:10.292244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.383 [2024-11-20 09:17:10.292329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.383 qpair failed and we were unable to recover it. 
00:26:33.383 [2024-11-20 09:17:10.292588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.383 [2024-11-20 09:17:10.292653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.383 qpair failed and we were unable to recover it. 
00:26:33.385 [... identical connect() failure (errno = 111) and qpair recovery error for tqpair=0x7fe968000b90, addr=10.0.0.2, port=4420 repeated through 2024-11-20 09:17:10.330798 ...]
00:26:33.386 [2024-11-20 09:17:10.331064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.386 [2024-11-20 09:17:10.331131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.386 qpair failed and we were unable to recover it. 00:26:33.386 [2024-11-20 09:17:10.331419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.386 [2024-11-20 09:17:10.331485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.386 qpair failed and we were unable to recover it. 00:26:33.386 [2024-11-20 09:17:10.331776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.386 [2024-11-20 09:17:10.331841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.386 qpair failed and we were unable to recover it. 00:26:33.386 [2024-11-20 09:17:10.332106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.386 [2024-11-20 09:17:10.332173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.386 qpair failed and we were unable to recover it. 00:26:33.386 [2024-11-20 09:17:10.332454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.386 [2024-11-20 09:17:10.332521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.386 qpair failed and we were unable to recover it. 
00:26:33.386 [2024-11-20 09:17:10.332777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.332845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 00:26:33.387 [2024-11-20 09:17:10.333116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.333181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 00:26:33.387 [2024-11-20 09:17:10.333436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.333503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 00:26:33.387 [2024-11-20 09:17:10.333761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.333828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 00:26:33.387 [2024-11-20 09:17:10.334124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.334189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 
00:26:33.387 [2024-11-20 09:17:10.334449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.334516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 00:26:33.387 [2024-11-20 09:17:10.334769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.334834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 00:26:33.387 [2024-11-20 09:17:10.335063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.335129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 00:26:33.387 [2024-11-20 09:17:10.335394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.335461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 00:26:33.387 [2024-11-20 09:17:10.335712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.335776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 
00:26:33.387 [2024-11-20 09:17:10.336023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.336090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 00:26:33.387 [2024-11-20 09:17:10.336391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.336457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 00:26:33.387 [2024-11-20 09:17:10.336681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.336746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 00:26:33.387 [2024-11-20 09:17:10.336973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.337041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 00:26:33.387 [2024-11-20 09:17:10.337337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.337405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 
00:26:33.387 [2024-11-20 09:17:10.337669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.337733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 00:26:33.387 [2024-11-20 09:17:10.338022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.338097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 00:26:33.387 [2024-11-20 09:17:10.338391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.338459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 00:26:33.387 [2024-11-20 09:17:10.338741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.338805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 00:26:33.387 [2024-11-20 09:17:10.339057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.339122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 
00:26:33.387 [2024-11-20 09:17:10.339362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.339428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 00:26:33.387 [2024-11-20 09:17:10.339641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.339709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 00:26:33.387 [2024-11-20 09:17:10.340011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.340076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 00:26:33.387 [2024-11-20 09:17:10.340342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.340410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 00:26:33.387 [2024-11-20 09:17:10.340654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.340720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 
00:26:33.387 [2024-11-20 09:17:10.340953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.341018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 00:26:33.387 [2024-11-20 09:17:10.341230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.341298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 00:26:33.387 [2024-11-20 09:17:10.341630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.341697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 00:26:33.387 [2024-11-20 09:17:10.341958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.342024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 00:26:33.387 [2024-11-20 09:17:10.342288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.342377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 
00:26:33.387 [2024-11-20 09:17:10.342639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.342707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 00:26:33.387 [2024-11-20 09:17:10.342988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.343054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 00:26:33.387 [2024-11-20 09:17:10.343356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.343424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 00:26:33.387 [2024-11-20 09:17:10.343661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.343725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 00:26:33.387 [2024-11-20 09:17:10.343973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.344041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 
00:26:33.387 [2024-11-20 09:17:10.344284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.344366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 00:26:33.387 [2024-11-20 09:17:10.344595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.344660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 00:26:33.387 [2024-11-20 09:17:10.344912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.387 [2024-11-20 09:17:10.344981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.387 qpair failed and we were unable to recover it. 00:26:33.388 [2024-11-20 09:17:10.345248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.345345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 00:26:33.388 [2024-11-20 09:17:10.345598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.345663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 
00:26:33.388 [2024-11-20 09:17:10.345917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.345982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 00:26:33.388 [2024-11-20 09:17:10.346183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.346251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 00:26:33.388 [2024-11-20 09:17:10.346530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.346596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 00:26:33.388 [2024-11-20 09:17:10.346880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.346946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 00:26:33.388 [2024-11-20 09:17:10.347255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.347341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 
00:26:33.388 [2024-11-20 09:17:10.347594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.347660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 00:26:33.388 [2024-11-20 09:17:10.347918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.347984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 00:26:33.388 [2024-11-20 09:17:10.348274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.348358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 00:26:33.388 [2024-11-20 09:17:10.348644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.348709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 00:26:33.388 [2024-11-20 09:17:10.348955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.349023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 
00:26:33.388 [2024-11-20 09:17:10.349338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.349406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 00:26:33.388 [2024-11-20 09:17:10.349656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.349721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 00:26:33.388 [2024-11-20 09:17:10.350017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.350084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 00:26:33.388 [2024-11-20 09:17:10.350342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.350411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 00:26:33.388 [2024-11-20 09:17:10.350699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.350765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 
00:26:33.388 [2024-11-20 09:17:10.351025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.351091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 00:26:33.388 [2024-11-20 09:17:10.351372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.351449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 00:26:33.388 [2024-11-20 09:17:10.351654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.351721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 00:26:33.388 [2024-11-20 09:17:10.351985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.352051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 00:26:33.388 [2024-11-20 09:17:10.352266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.352346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 
00:26:33.388 [2024-11-20 09:17:10.352597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.352663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 00:26:33.388 [2024-11-20 09:17:10.352963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.353028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 00:26:33.388 [2024-11-20 09:17:10.353281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.353380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 00:26:33.388 [2024-11-20 09:17:10.353672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.353737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 00:26:33.388 [2024-11-20 09:17:10.354021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.354085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 
00:26:33.388 [2024-11-20 09:17:10.354379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.354445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 00:26:33.388 [2024-11-20 09:17:10.354680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.354745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 00:26:33.388 [2024-11-20 09:17:10.354995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.355060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 00:26:33.388 [2024-11-20 09:17:10.355356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.355422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 00:26:33.388 [2024-11-20 09:17:10.355671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.355736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 
00:26:33.388 [2024-11-20 09:17:10.355997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.356066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 00:26:33.388 [2024-11-20 09:17:10.356364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.356433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 00:26:33.388 [2024-11-20 09:17:10.356658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.356724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 00:26:33.388 [2024-11-20 09:17:10.356944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.357012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 00:26:33.388 [2024-11-20 09:17:10.357467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.388 [2024-11-20 09:17:10.357537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.388 qpair failed and we were unable to recover it. 
00:26:33.675 [2024-11-20 09:17:10.394741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.394807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 00:26:33.675 [2024-11-20 09:17:10.395095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.395159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 00:26:33.675 [2024-11-20 09:17:10.395411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.395477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 00:26:33.675 [2024-11-20 09:17:10.395741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.395806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 00:26:33.675 [2024-11-20 09:17:10.396111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.396175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 
00:26:33.675 [2024-11-20 09:17:10.396464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.396532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 00:26:33.675 [2024-11-20 09:17:10.396819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.396885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 00:26:33.675 [2024-11-20 09:17:10.397166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.397230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 00:26:33.675 [2024-11-20 09:17:10.397504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.397571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 00:26:33.675 [2024-11-20 09:17:10.397828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.397896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 
00:26:33.675 [2024-11-20 09:17:10.398158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.398224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 00:26:33.675 [2024-11-20 09:17:10.398462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.398530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 00:26:33.675 [2024-11-20 09:17:10.398751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.398815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 00:26:33.675 [2024-11-20 09:17:10.399051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.399117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 00:26:33.675 [2024-11-20 09:17:10.399336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.399405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 
00:26:33.675 [2024-11-20 09:17:10.399612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.399680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 00:26:33.675 [2024-11-20 09:17:10.399919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.399984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 00:26:33.675 [2024-11-20 09:17:10.400231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.400299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 00:26:33.675 [2024-11-20 09:17:10.400605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.400675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 00:26:33.675 [2024-11-20 09:17:10.400973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.401039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 
00:26:33.675 [2024-11-20 09:17:10.401339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.401406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 00:26:33.675 [2024-11-20 09:17:10.401589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.401654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 00:26:33.675 [2024-11-20 09:17:10.401946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.402011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 00:26:33.675 [2024-11-20 09:17:10.402295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.402379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 00:26:33.675 [2024-11-20 09:17:10.402680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.402745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 
00:26:33.675 [2024-11-20 09:17:10.403002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.403068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 00:26:33.675 [2024-11-20 09:17:10.403355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.403422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 00:26:33.675 [2024-11-20 09:17:10.403676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.403741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 00:26:33.675 [2024-11-20 09:17:10.404037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.404102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 00:26:33.675 [2024-11-20 09:17:10.404393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.404460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 
00:26:33.675 [2024-11-20 09:17:10.404711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.404779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 00:26:33.675 [2024-11-20 09:17:10.405069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.405145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 00:26:33.675 [2024-11-20 09:17:10.405407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.405474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 00:26:33.675 [2024-11-20 09:17:10.405765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.405830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 00:26:33.675 [2024-11-20 09:17:10.406070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.406134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 
00:26:33.675 [2024-11-20 09:17:10.406370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.406437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 00:26:33.675 [2024-11-20 09:17:10.406725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.675 [2024-11-20 09:17:10.406790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.675 qpair failed and we were unable to recover it. 00:26:33.675 [2024-11-20 09:17:10.407037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.407102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 00:26:33.676 [2024-11-20 09:17:10.407373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.407440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 00:26:33.676 [2024-11-20 09:17:10.407704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.407768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 
00:26:33.676 [2024-11-20 09:17:10.408042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.408106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 00:26:33.676 [2024-11-20 09:17:10.408353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.408420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 00:26:33.676 [2024-11-20 09:17:10.408663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.408731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 00:26:33.676 [2024-11-20 09:17:10.408973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.409039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 00:26:33.676 [2024-11-20 09:17:10.409296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.409375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 
00:26:33.676 [2024-11-20 09:17:10.409597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.409663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 00:26:33.676 [2024-11-20 09:17:10.409937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.410002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 00:26:33.676 [2024-11-20 09:17:10.410196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.410260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 00:26:33.676 [2024-11-20 09:17:10.410500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.410565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 00:26:33.676 [2024-11-20 09:17:10.410804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.410871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 
00:26:33.676 [2024-11-20 09:17:10.411133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.411198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 00:26:33.676 [2024-11-20 09:17:10.411506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.411573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 00:26:33.676 [2024-11-20 09:17:10.411845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.411911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 00:26:33.676 [2024-11-20 09:17:10.412204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.412270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 00:26:33.676 [2024-11-20 09:17:10.412584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.412650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 
00:26:33.676 [2024-11-20 09:17:10.412951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.413016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 00:26:33.676 [2024-11-20 09:17:10.413221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.413288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 00:26:33.676 [2024-11-20 09:17:10.413575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.413641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 00:26:33.676 [2024-11-20 09:17:10.413895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.413973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 00:26:33.676 [2024-11-20 09:17:10.414279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.414366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 
00:26:33.676 [2024-11-20 09:17:10.414654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.414720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 00:26:33.676 [2024-11-20 09:17:10.414940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.415008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 00:26:33.676 [2024-11-20 09:17:10.415320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.415388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 00:26:33.676 [2024-11-20 09:17:10.415679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.415744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 00:26:33.676 [2024-11-20 09:17:10.416028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.416093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 
00:26:33.676 [2024-11-20 09:17:10.416365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.416432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 00:26:33.676 [2024-11-20 09:17:10.416732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.416796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 00:26:33.676 [2024-11-20 09:17:10.417038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.417103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 00:26:33.676 [2024-11-20 09:17:10.417403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.417471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 00:26:33.676 [2024-11-20 09:17:10.417733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.417799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 
00:26:33.676 [2024-11-20 09:17:10.418090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.418157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 00:26:33.676 [2024-11-20 09:17:10.418418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.418485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 00:26:33.676 [2024-11-20 09:17:10.418795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.418863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 00:26:33.676 [2024-11-20 09:17:10.419065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.419131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 00:26:33.676 [2024-11-20 09:17:10.419380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.676 [2024-11-20 09:17:10.419450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.676 qpair failed and we were unable to recover it. 
00:26:33.676 [2024-11-20 09:17:10.419742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.677 [2024-11-20 09:17:10.419807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:33.677 qpair failed and we were unable to recover it.
00:26:33.680 [... the same three-line failure (connect() failed, errno = 111; sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously, identical except for timestamps, through 2024-11-20 09:17:10.456910 ...]
00:26:33.680 [2024-11-20 09:17:10.457189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.680 [2024-11-20 09:17:10.457253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.680 qpair failed and we were unable to recover it. 00:26:33.680 [2024-11-20 09:17:10.457560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.680 [2024-11-20 09:17:10.457626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.680 qpair failed and we were unable to recover it. 00:26:33.680 [2024-11-20 09:17:10.457886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.680 [2024-11-20 09:17:10.457951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.680 qpair failed and we were unable to recover it. 00:26:33.680 [2024-11-20 09:17:10.458233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.680 [2024-11-20 09:17:10.458298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.680 qpair failed and we were unable to recover it. 00:26:33.680 [2024-11-20 09:17:10.458603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.680 [2024-11-20 09:17:10.458672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.680 qpair failed and we were unable to recover it. 
00:26:33.680 [2024-11-20 09:17:10.458959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.680 [2024-11-20 09:17:10.459024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.680 qpair failed and we were unable to recover it. 00:26:33.680 [2024-11-20 09:17:10.459327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.680 [2024-11-20 09:17:10.459395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.680 qpair failed and we were unable to recover it. 00:26:33.680 [2024-11-20 09:17:10.459650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.680 [2024-11-20 09:17:10.459716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.680 qpair failed and we were unable to recover it. 00:26:33.680 [2024-11-20 09:17:10.459944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.680 [2024-11-20 09:17:10.460009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.680 qpair failed and we were unable to recover it. 00:26:33.680 [2024-11-20 09:17:10.460280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.680 [2024-11-20 09:17:10.460386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.680 qpair failed and we were unable to recover it. 
00:26:33.680 [2024-11-20 09:17:10.460679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.680 [2024-11-20 09:17:10.460744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.680 qpair failed and we were unable to recover it. 00:26:33.680 [2024-11-20 09:17:10.461001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.680 [2024-11-20 09:17:10.461066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.680 qpair failed and we were unable to recover it. 00:26:33.680 [2024-11-20 09:17:10.461275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.680 [2024-11-20 09:17:10.461364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.680 qpair failed and we were unable to recover it. 00:26:33.680 [2024-11-20 09:17:10.461630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.680 [2024-11-20 09:17:10.461694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.680 qpair failed and we were unable to recover it. 00:26:33.680 [2024-11-20 09:17:10.461989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.680 [2024-11-20 09:17:10.462054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.680 qpair failed and we were unable to recover it. 
00:26:33.680 [2024-11-20 09:17:10.462279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.680 [2024-11-20 09:17:10.462366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.680 qpair failed and we were unable to recover it. 00:26:33.680 [2024-11-20 09:17:10.462645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.680 [2024-11-20 09:17:10.462710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.680 qpair failed and we were unable to recover it. 00:26:33.680 [2024-11-20 09:17:10.462921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.680 [2024-11-20 09:17:10.462986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.680 qpair failed and we were unable to recover it. 00:26:33.680 [2024-11-20 09:17:10.463228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.680 [2024-11-20 09:17:10.463296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.680 qpair failed and we were unable to recover it. 00:26:33.680 [2024-11-20 09:17:10.463538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.680 [2024-11-20 09:17:10.463604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.680 qpair failed and we were unable to recover it. 
00:26:33.680 [2024-11-20 09:17:10.463856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.680 [2024-11-20 09:17:10.463922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.680 qpair failed and we were unable to recover it. 00:26:33.680 [2024-11-20 09:17:10.464213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.680 [2024-11-20 09:17:10.464279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.680 qpair failed and we were unable to recover it. 00:26:33.680 [2024-11-20 09:17:10.464562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.680 [2024-11-20 09:17:10.464630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.680 qpair failed and we were unable to recover it. 00:26:33.680 [2024-11-20 09:17:10.464923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.680 [2024-11-20 09:17:10.464988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.680 qpair failed and we were unable to recover it. 00:26:33.680 [2024-11-20 09:17:10.465224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.680 [2024-11-20 09:17:10.465289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.680 qpair failed and we were unable to recover it. 
00:26:33.680 [2024-11-20 09:17:10.465603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.680 [2024-11-20 09:17:10.465668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.680 qpair failed and we were unable to recover it. 00:26:33.680 [2024-11-20 09:17:10.465897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.680 [2024-11-20 09:17:10.465962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.680 qpair failed and we were unable to recover it. 00:26:33.680 [2024-11-20 09:17:10.466721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.680 [2024-11-20 09:17:10.466754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.680 qpair failed and we were unable to recover it. 00:26:33.680 [2024-11-20 09:17:10.466953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.680 [2024-11-20 09:17:10.467012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.680 qpair failed and we were unable to recover it. 00:26:33.680 [2024-11-20 09:17:10.467139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.680 [2024-11-20 09:17:10.467167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.680 qpair failed and we were unable to recover it. 
00:26:33.680 [2024-11-20 09:17:10.467289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.680 [2024-11-20 09:17:10.467325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 00:26:33.681 [2024-11-20 09:17:10.467465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.467533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 00:26:33.681 [2024-11-20 09:17:10.467668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.467718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 00:26:33.681 [2024-11-20 09:17:10.467813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.467843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 00:26:33.681 [2024-11-20 09:17:10.467940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.467969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 
00:26:33.681 [2024-11-20 09:17:10.468098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.468127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 00:26:33.681 [2024-11-20 09:17:10.468258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.468287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 00:26:33.681 [2024-11-20 09:17:10.468431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.468461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 00:26:33.681 [2024-11-20 09:17:10.468606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.468635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 00:26:33.681 [2024-11-20 09:17:10.468761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.468789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 
00:26:33.681 [2024-11-20 09:17:10.468987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.469038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 00:26:33.681 [2024-11-20 09:17:10.469196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.469225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 00:26:33.681 [2024-11-20 09:17:10.469356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.469386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 00:26:33.681 [2024-11-20 09:17:10.469511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.469540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 00:26:33.681 [2024-11-20 09:17:10.469666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.469695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 
00:26:33.681 [2024-11-20 09:17:10.469814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.469842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 00:26:33.681 [2024-11-20 09:17:10.469931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.469959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 00:26:33.681 [2024-11-20 09:17:10.470082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.470109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 00:26:33.681 [2024-11-20 09:17:10.470199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.470227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 00:26:33.681 [2024-11-20 09:17:10.470323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.470352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 
00:26:33.681 [2024-11-20 09:17:10.470477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.470505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 00:26:33.681 [2024-11-20 09:17:10.470621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.470649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 00:26:33.681 [2024-11-20 09:17:10.470754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.470783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 00:26:33.681 [2024-11-20 09:17:10.470897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.470925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 00:26:33.681 [2024-11-20 09:17:10.471069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.471097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 
00:26:33.681 [2024-11-20 09:17:10.471224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.471253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 00:26:33.681 [2024-11-20 09:17:10.471377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.471406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 00:26:33.681 [2024-11-20 09:17:10.471498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.471526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 00:26:33.681 [2024-11-20 09:17:10.471649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.471677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 00:26:33.681 [2024-11-20 09:17:10.471800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.471828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 
00:26:33.681 [2024-11-20 09:17:10.471918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.471946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 00:26:33.681 [2024-11-20 09:17:10.472034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.472063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 00:26:33.681 [2024-11-20 09:17:10.472189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.472217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 00:26:33.681 [2024-11-20 09:17:10.472312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.472347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 00:26:33.681 [2024-11-20 09:17:10.472440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.472468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 
00:26:33.681 [2024-11-20 09:17:10.472589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.472617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 00:26:33.681 [2024-11-20 09:17:10.472773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.472802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 00:26:33.681 [2024-11-20 09:17:10.472934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.472962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.681 qpair failed and we were unable to recover it. 00:26:33.681 [2024-11-20 09:17:10.473081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.681 [2024-11-20 09:17:10.473122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.682 qpair failed and we were unable to recover it. 00:26:33.682 [2024-11-20 09:17:10.473242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.682 [2024-11-20 09:17:10.473270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.682 qpair failed and we were unable to recover it. 
00:26:33.682 [2024-11-20 09:17:10.473425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.682 [2024-11-20 09:17:10.473454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.682 qpair failed and we were unable to recover it. 00:26:33.682 [2024-11-20 09:17:10.473575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.682 [2024-11-20 09:17:10.473602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.682 qpair failed and we were unable to recover it. 00:26:33.682 [2024-11-20 09:17:10.473753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.682 [2024-11-20 09:17:10.473781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.682 qpair failed and we were unable to recover it. 00:26:33.682 [2024-11-20 09:17:10.473901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.682 [2024-11-20 09:17:10.473930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.682 qpair failed and we were unable to recover it. 00:26:33.682 [2024-11-20 09:17:10.474043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.682 [2024-11-20 09:17:10.474071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.682 qpair failed and we were unable to recover it. 
00:26:33.682 [2024-11-20 09:17:10.474167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.682 [2024-11-20 09:17:10.474196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.682 qpair failed and we were unable to recover it.
00:26:33.685 [... the same connect()/qpair-failure message pair repeated 114 more times between 09:17:10.474 and 09:17:10.490, differing only in timestamp; every attempt against tqpair=0x7fe968000b90, addr=10.0.0.2, port=4420 failed with errno = 111 and the qpair could not be recovered ...]
00:26:33.685 [2024-11-20 09:17:10.491033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.685 [2024-11-20 09:17:10.491061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.685 qpair failed and we were unable to recover it. 00:26:33.685 [2024-11-20 09:17:10.491146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.685 [2024-11-20 09:17:10.491175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.685 qpair failed and we were unable to recover it. 00:26:33.685 [2024-11-20 09:17:10.491261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.685 [2024-11-20 09:17:10.491289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.685 qpair failed and we were unable to recover it. 00:26:33.685 [2024-11-20 09:17:10.491393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.685 [2024-11-20 09:17:10.491422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.685 qpair failed and we were unable to recover it. 00:26:33.685 [2024-11-20 09:17:10.491562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.685 [2024-11-20 09:17:10.491617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.685 qpair failed and we were unable to recover it. 
00:26:33.685 [2024-11-20 09:17:10.491733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.685 [2024-11-20 09:17:10.491762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.685 qpair failed and we were unable to recover it. 00:26:33.685 [2024-11-20 09:17:10.491885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.685 [2024-11-20 09:17:10.491913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.685 qpair failed and we were unable to recover it. 00:26:33.685 [2024-11-20 09:17:10.491994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.685 [2024-11-20 09:17:10.492023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.685 qpair failed and we were unable to recover it. 00:26:33.685 [2024-11-20 09:17:10.492151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.685 [2024-11-20 09:17:10.492180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.685 qpair failed and we were unable to recover it. 00:26:33.685 [2024-11-20 09:17:10.492297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.685 [2024-11-20 09:17:10.492355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.685 qpair failed and we were unable to recover it. 
00:26:33.685 [2024-11-20 09:17:10.492449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.685 [2024-11-20 09:17:10.492477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.685 qpair failed and we were unable to recover it. 00:26:33.685 [2024-11-20 09:17:10.492594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.685 [2024-11-20 09:17:10.492622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.685 qpair failed and we were unable to recover it. 00:26:33.685 [2024-11-20 09:17:10.492775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.685 [2024-11-20 09:17:10.492803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.685 qpair failed and we were unable to recover it. 00:26:33.685 [2024-11-20 09:17:10.492886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.685 [2024-11-20 09:17:10.492913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.685 qpair failed and we were unable to recover it. 00:26:33.685 [2024-11-20 09:17:10.493001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.685 [2024-11-20 09:17:10.493029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.685 qpair failed and we were unable to recover it. 
00:26:33.685 [2024-11-20 09:17:10.493173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.685 [2024-11-20 09:17:10.493201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.685 qpair failed and we were unable to recover it. 00:26:33.685 [2024-11-20 09:17:10.493349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.685 [2024-11-20 09:17:10.493378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.685 qpair failed and we were unable to recover it. 00:26:33.685 [2024-11-20 09:17:10.493498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.685 [2024-11-20 09:17:10.493526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.685 qpair failed and we were unable to recover it. 00:26:33.685 [2024-11-20 09:17:10.493684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.685 [2024-11-20 09:17:10.493712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.685 qpair failed and we were unable to recover it. 00:26:33.685 [2024-11-20 09:17:10.493804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.685 [2024-11-20 09:17:10.493832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.685 qpair failed and we were unable to recover it. 
00:26:33.685 [2024-11-20 09:17:10.493992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.685 [2024-11-20 09:17:10.494020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.685 qpair failed and we were unable to recover it. 00:26:33.685 [2024-11-20 09:17:10.494170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.685 [2024-11-20 09:17:10.494197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.685 qpair failed and we were unable to recover it. 00:26:33.685 [2024-11-20 09:17:10.494322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.685 [2024-11-20 09:17:10.494351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.685 qpair failed and we were unable to recover it. 00:26:33.685 [2024-11-20 09:17:10.494545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.685 [2024-11-20 09:17:10.494605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.685 qpair failed and we were unable to recover it. 00:26:33.685 [2024-11-20 09:17:10.494751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.685 [2024-11-20 09:17:10.494805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.685 qpair failed and we were unable to recover it. 
00:26:33.685 [2024-11-20 09:17:10.494937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.685 [2024-11-20 09:17:10.494965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.685 qpair failed and we were unable to recover it. 00:26:33.685 [2024-11-20 09:17:10.495091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.685 [2024-11-20 09:17:10.495120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.685 qpair failed and we were unable to recover it. 00:26:33.686 [2024-11-20 09:17:10.495211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.686 [2024-11-20 09:17:10.495239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.686 qpair failed and we were unable to recover it. 00:26:33.686 [2024-11-20 09:17:10.495418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.686 [2024-11-20 09:17:10.495465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.686 qpair failed and we were unable to recover it. 00:26:33.686 [2024-11-20 09:17:10.495657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.686 [2024-11-20 09:17:10.495710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.686 qpair failed and we were unable to recover it. 
00:26:33.686 [2024-11-20 09:17:10.495801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.686 [2024-11-20 09:17:10.495828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.686 qpair failed and we were unable to recover it. 00:26:33.686 [2024-11-20 09:17:10.495943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.686 [2024-11-20 09:17:10.495972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.686 qpair failed and we were unable to recover it. 00:26:33.686 [2024-11-20 09:17:10.496069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.686 [2024-11-20 09:17:10.496099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.686 qpair failed and we were unable to recover it. 00:26:33.686 [2024-11-20 09:17:10.496196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.686 [2024-11-20 09:17:10.496224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.686 qpair failed and we were unable to recover it. 00:26:33.686 [2024-11-20 09:17:10.496330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.686 [2024-11-20 09:17:10.496359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.686 qpair failed and we were unable to recover it. 
00:26:33.686 [2024-11-20 09:17:10.496454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.686 [2024-11-20 09:17:10.496481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.686 qpair failed and we were unable to recover it. 00:26:33.686 [2024-11-20 09:17:10.496569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.686 [2024-11-20 09:17:10.496597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.686 qpair failed and we were unable to recover it. 00:26:33.686 [2024-11-20 09:17:10.496723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.686 [2024-11-20 09:17:10.496751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.686 qpair failed and we were unable to recover it. 00:26:33.686 [2024-11-20 09:17:10.496881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.686 [2024-11-20 09:17:10.496914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.686 qpair failed and we were unable to recover it. 00:26:33.686 [2024-11-20 09:17:10.497006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.686 [2024-11-20 09:17:10.497034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.686 qpair failed and we were unable to recover it. 
00:26:33.686 [2024-11-20 09:17:10.497154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.686 [2024-11-20 09:17:10.497183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.686 qpair failed and we were unable to recover it. 00:26:33.686 [2024-11-20 09:17:10.497276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.686 [2024-11-20 09:17:10.497321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.686 qpair failed and we were unable to recover it. 00:26:33.686 [2024-11-20 09:17:10.497425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.686 [2024-11-20 09:17:10.497453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.686 qpair failed and we were unable to recover it. 00:26:33.686 [2024-11-20 09:17:10.497579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.686 [2024-11-20 09:17:10.497607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.686 qpair failed and we were unable to recover it. 00:26:33.686 [2024-11-20 09:17:10.497717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.686 [2024-11-20 09:17:10.497744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.686 qpair failed and we were unable to recover it. 
00:26:33.686 [2024-11-20 09:17:10.497840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.686 [2024-11-20 09:17:10.497867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.686 qpair failed and we were unable to recover it. 00:26:33.686 [2024-11-20 09:17:10.497946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.686 [2024-11-20 09:17:10.497974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.686 qpair failed and we were unable to recover it. 00:26:33.686 [2024-11-20 09:17:10.498077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.686 [2024-11-20 09:17:10.498106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.686 qpair failed and we were unable to recover it. 00:26:33.686 [2024-11-20 09:17:10.498241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.686 [2024-11-20 09:17:10.498286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.686 qpair failed and we were unable to recover it. 00:26:33.686 [2024-11-20 09:17:10.498414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.686 [2024-11-20 09:17:10.498443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.686 qpair failed and we were unable to recover it. 
00:26:33.686 [2024-11-20 09:17:10.498570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.686 [2024-11-20 09:17:10.498598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.686 qpair failed and we were unable to recover it. 00:26:33.686 [2024-11-20 09:17:10.498749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.686 [2024-11-20 09:17:10.498777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.686 qpair failed and we were unable to recover it. 00:26:33.686 [2024-11-20 09:17:10.498902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.686 [2024-11-20 09:17:10.498930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.686 qpair failed and we were unable to recover it. 00:26:33.686 [2024-11-20 09:17:10.499054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.686 [2024-11-20 09:17:10.499082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.686 qpair failed and we were unable to recover it. 00:26:33.686 [2024-11-20 09:17:10.499214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.686 [2024-11-20 09:17:10.499241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.686 qpair failed and we were unable to recover it. 
00:26:33.686 [2024-11-20 09:17:10.499327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.686 [2024-11-20 09:17:10.499356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.686 qpair failed and we were unable to recover it. 00:26:33.686 [2024-11-20 09:17:10.499442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.686 [2024-11-20 09:17:10.499501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.686 qpair failed and we were unable to recover it. 00:26:33.686 [2024-11-20 09:17:10.499673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.686 [2024-11-20 09:17:10.499737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.686 qpair failed and we were unable to recover it. 00:26:33.686 [2024-11-20 09:17:10.499991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.686 [2024-11-20 09:17:10.500056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.686 qpair failed and we were unable to recover it. 00:26:33.686 [2024-11-20 09:17:10.500293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.686 [2024-11-20 09:17:10.500374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.686 qpair failed and we were unable to recover it. 
00:26:33.686 [2024-11-20 09:17:10.500518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.686 [2024-11-20 09:17:10.500546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.686 qpair failed and we were unable to recover it. 00:26:33.687 [2024-11-20 09:17:10.500664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.687 [2024-11-20 09:17:10.500739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.687 qpair failed and we were unable to recover it. 00:26:33.687 [2024-11-20 09:17:10.501004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.687 [2024-11-20 09:17:10.501068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.687 qpair failed and we were unable to recover it. 00:26:33.687 [2024-11-20 09:17:10.501238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.687 [2024-11-20 09:17:10.501265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.687 qpair failed and we were unable to recover it. 00:26:33.687 [2024-11-20 09:17:10.501389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.687 [2024-11-20 09:17:10.501417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.687 qpair failed and we were unable to recover it. 
00:26:33.687 [2024-11-20 09:17:10.501506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.687 [2024-11-20 09:17:10.501534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.687 qpair failed and we were unable to recover it. 00:26:33.687 [2024-11-20 09:17:10.501765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.687 [2024-11-20 09:17:10.501828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.687 qpair failed and we were unable to recover it. 00:26:33.687 [2024-11-20 09:17:10.502085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.687 [2024-11-20 09:17:10.502151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.687 qpair failed and we were unable to recover it. 00:26:33.687 [2024-11-20 09:17:10.502393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.687 [2024-11-20 09:17:10.502444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.687 qpair failed and we were unable to recover it. 00:26:33.687 [2024-11-20 09:17:10.502564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.687 [2024-11-20 09:17:10.502592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.687 qpair failed and we were unable to recover it. 
00:26:33.687 [2024-11-20 09:17:10.502773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.687 [2024-11-20 09:17:10.502837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.687 qpair failed and we were unable to recover it. 00:26:33.687 [2024-11-20 09:17:10.503087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.687 [2024-11-20 09:17:10.503151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.687 qpair failed and we were unable to recover it. 00:26:33.687 [2024-11-20 09:17:10.503390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.687 [2024-11-20 09:17:10.503418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.687 qpair failed and we were unable to recover it. 00:26:33.687 [2024-11-20 09:17:10.503543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.687 [2024-11-20 09:17:10.503570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.687 qpair failed and we were unable to recover it. 00:26:33.687 [2024-11-20 09:17:10.503880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.687 [2024-11-20 09:17:10.503944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.687 qpair failed and we were unable to recover it. 
00:26:33.687 [2024-11-20 09:17:10.504224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.687 [2024-11-20 09:17:10.504287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:33.687 qpair failed and we were unable to recover it.
00:26:33.687-00:26:33.690 [the same three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats with new timestamps from 09:17:10.504458 through 09:17:10.539005]
00:26:33.690 [2024-11-20 09:17:10.539262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.690 [2024-11-20 09:17:10.539362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.690 qpair failed and we were unable to recover it. 00:26:33.690 [2024-11-20 09:17:10.539591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.690 [2024-11-20 09:17:10.539655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.690 qpair failed and we were unable to recover it. 00:26:33.690 [2024-11-20 09:17:10.539908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.690 [2024-11-20 09:17:10.539971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.690 qpair failed and we were unable to recover it. 00:26:33.690 [2024-11-20 09:17:10.540217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.690 [2024-11-20 09:17:10.540283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.690 qpair failed and we were unable to recover it. 00:26:33.690 [2024-11-20 09:17:10.540514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.690 [2024-11-20 09:17:10.540579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.690 qpair failed and we were unable to recover it. 
00:26:33.690 [2024-11-20 09:17:10.540841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.690 [2024-11-20 09:17:10.540905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.690 qpair failed and we were unable to recover it. 00:26:33.690 [2024-11-20 09:17:10.541202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.690 [2024-11-20 09:17:10.541266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.690 qpair failed and we were unable to recover it. 00:26:33.690 [2024-11-20 09:17:10.541573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.690 [2024-11-20 09:17:10.541638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.690 qpair failed and we were unable to recover it. 00:26:33.690 [2024-11-20 09:17:10.541920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.690 [2024-11-20 09:17:10.541986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.690 qpair failed and we were unable to recover it. 00:26:33.690 [2024-11-20 09:17:10.542195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.690 [2024-11-20 09:17:10.542260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.690 qpair failed and we were unable to recover it. 
00:26:33.690 [2024-11-20 09:17:10.542541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.690 [2024-11-20 09:17:10.542607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.690 qpair failed and we were unable to recover it. 00:26:33.690 [2024-11-20 09:17:10.542868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.690 [2024-11-20 09:17:10.542934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.690 qpair failed and we were unable to recover it. 00:26:33.690 [2024-11-20 09:17:10.543241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.690 [2024-11-20 09:17:10.543339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.690 qpair failed and we were unable to recover it. 00:26:33.690 [2024-11-20 09:17:10.543633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.690 [2024-11-20 09:17:10.543696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.690 qpair failed and we were unable to recover it. 00:26:33.690 [2024-11-20 09:17:10.543889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.690 [2024-11-20 09:17:10.543957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.690 qpair failed and we were unable to recover it. 
00:26:33.690 [2024-11-20 09:17:10.544220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.690 [2024-11-20 09:17:10.544284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.690 qpair failed and we were unable to recover it. 00:26:33.690 [2024-11-20 09:17:10.544596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.690 [2024-11-20 09:17:10.544660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.690 qpair failed and we were unable to recover it. 00:26:33.690 [2024-11-20 09:17:10.544954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.690 [2024-11-20 09:17:10.545017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.690 qpair failed and we were unable to recover it. 00:26:33.690 [2024-11-20 09:17:10.545320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.690 [2024-11-20 09:17:10.545386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.690 qpair failed and we were unable to recover it. 00:26:33.690 [2024-11-20 09:17:10.545635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.690 [2024-11-20 09:17:10.545699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.690 qpair failed and we were unable to recover it. 
00:26:33.690 [2024-11-20 09:17:10.545941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.690 [2024-11-20 09:17:10.546006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.690 qpair failed and we were unable to recover it. 00:26:33.690 [2024-11-20 09:17:10.546228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.690 [2024-11-20 09:17:10.546293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.690 qpair failed and we were unable to recover it. 00:26:33.690 [2024-11-20 09:17:10.546587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.690 [2024-11-20 09:17:10.546652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.690 qpair failed and we were unable to recover it. 00:26:33.690 [2024-11-20 09:17:10.546862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.691 [2024-11-20 09:17:10.546929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.691 qpair failed and we were unable to recover it. 00:26:33.691 [2024-11-20 09:17:10.547216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.691 [2024-11-20 09:17:10.547280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.691 qpair failed and we were unable to recover it. 
00:26:33.691 [2024-11-20 09:17:10.547616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.691 [2024-11-20 09:17:10.547679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.691 qpair failed and we were unable to recover it. 00:26:33.691 [2024-11-20 09:17:10.547917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.691 [2024-11-20 09:17:10.547984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.691 qpair failed and we were unable to recover it. 00:26:33.691 [2024-11-20 09:17:10.548271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.691 [2024-11-20 09:17:10.548356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.691 qpair failed and we were unable to recover it. 00:26:33.691 [2024-11-20 09:17:10.548605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.691 [2024-11-20 09:17:10.548673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.691 qpair failed and we were unable to recover it. 00:26:33.691 [2024-11-20 09:17:10.548962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.691 [2024-11-20 09:17:10.549026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.691 qpair failed and we were unable to recover it. 
00:26:33.691 [2024-11-20 09:17:10.549271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.691 [2024-11-20 09:17:10.549354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.691 qpair failed and we were unable to recover it. 00:26:33.691 [2024-11-20 09:17:10.549642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.691 [2024-11-20 09:17:10.549705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.691 qpair failed and we were unable to recover it. 00:26:33.691 [2024-11-20 09:17:10.549963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.691 [2024-11-20 09:17:10.550026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.691 qpair failed and we were unable to recover it. 00:26:33.691 [2024-11-20 09:17:10.550280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.691 [2024-11-20 09:17:10.550364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.691 qpair failed and we were unable to recover it. 00:26:33.691 [2024-11-20 09:17:10.550667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.691 [2024-11-20 09:17:10.550730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.691 qpair failed and we were unable to recover it. 
00:26:33.691 [2024-11-20 09:17:10.551017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.691 [2024-11-20 09:17:10.551096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.691 qpair failed and we were unable to recover it. 00:26:33.691 [2024-11-20 09:17:10.551361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.691 [2024-11-20 09:17:10.551427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.691 qpair failed and we were unable to recover it. 00:26:33.691 [2024-11-20 09:17:10.551685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.691 [2024-11-20 09:17:10.551748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.691 qpair failed and we were unable to recover it. 00:26:33.691 [2024-11-20 09:17:10.551953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.691 [2024-11-20 09:17:10.552018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.691 qpair failed and we were unable to recover it. 00:26:33.691 [2024-11-20 09:17:10.552245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.691 [2024-11-20 09:17:10.552323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.691 qpair failed and we were unable to recover it. 
00:26:33.691 [2024-11-20 09:17:10.552598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.691 [2024-11-20 09:17:10.552661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.691 qpair failed and we were unable to recover it. 00:26:33.691 [2024-11-20 09:17:10.552875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.691 [2024-11-20 09:17:10.552940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.691 qpair failed and we were unable to recover it. 00:26:33.691 [2024-11-20 09:17:10.553154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.691 [2024-11-20 09:17:10.553220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.691 qpair failed and we were unable to recover it. 00:26:33.691 [2024-11-20 09:17:10.553494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.691 [2024-11-20 09:17:10.553559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.691 qpair failed and we were unable to recover it. 00:26:33.691 [2024-11-20 09:17:10.553804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.691 [2024-11-20 09:17:10.553868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.691 qpair failed and we were unable to recover it. 
00:26:33.691 [2024-11-20 09:17:10.554151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.691 [2024-11-20 09:17:10.554215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.691 qpair failed and we were unable to recover it. 00:26:33.691 [2024-11-20 09:17:10.554483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.691 [2024-11-20 09:17:10.554548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.691 qpair failed and we were unable to recover it. 00:26:33.691 [2024-11-20 09:17:10.554766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.691 [2024-11-20 09:17:10.554832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.691 qpair failed and we were unable to recover it. 00:26:33.691 [2024-11-20 09:17:10.555116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.691 [2024-11-20 09:17:10.555180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.691 qpair failed and we were unable to recover it. 00:26:33.691 [2024-11-20 09:17:10.555410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.691 [2024-11-20 09:17:10.555478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.691 qpair failed and we were unable to recover it. 
00:26:33.691 [2024-11-20 09:17:10.555695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.691 [2024-11-20 09:17:10.555758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.691 qpair failed and we were unable to recover it. 00:26:33.691 [2024-11-20 09:17:10.556017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.691 [2024-11-20 09:17:10.556081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.691 qpair failed and we were unable to recover it. 00:26:33.691 [2024-11-20 09:17:10.556331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.691 [2024-11-20 09:17:10.556418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.691 qpair failed and we were unable to recover it. 00:26:33.691 [2024-11-20 09:17:10.556615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.691 [2024-11-20 09:17:10.556678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.691 qpair failed and we were unable to recover it. 00:26:33.691 [2024-11-20 09:17:10.556967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.691 [2024-11-20 09:17:10.557031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.691 qpair failed and we were unable to recover it. 
00:26:33.691 [2024-11-20 09:17:10.557277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.691 [2024-11-20 09:17:10.557360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.691 qpair failed and we were unable to recover it. 00:26:33.691 [2024-11-20 09:17:10.557563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.691 [2024-11-20 09:17:10.557630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.691 qpair failed and we were unable to recover it. 00:26:33.691 [2024-11-20 09:17:10.557889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.691 [2024-11-20 09:17:10.557953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.692 qpair failed and we were unable to recover it. 00:26:33.692 [2024-11-20 09:17:10.558180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.692 [2024-11-20 09:17:10.558243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.692 qpair failed and we were unable to recover it. 00:26:33.692 [2024-11-20 09:17:10.558521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.692 [2024-11-20 09:17:10.558587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.692 qpair failed and we were unable to recover it. 
00:26:33.692 [2024-11-20 09:17:10.558879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.692 [2024-11-20 09:17:10.558943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.692 qpair failed and we were unable to recover it. 00:26:33.692 [2024-11-20 09:17:10.559166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.692 [2024-11-20 09:17:10.559232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.692 qpair failed and we were unable to recover it. 00:26:33.692 [2024-11-20 09:17:10.559526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.692 [2024-11-20 09:17:10.559592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.692 qpair failed and we were unable to recover it. 00:26:33.692 [2024-11-20 09:17:10.559875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.692 [2024-11-20 09:17:10.559938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.692 qpair failed and we were unable to recover it. 00:26:33.692 [2024-11-20 09:17:10.560182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.692 [2024-11-20 09:17:10.560249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.692 qpair failed and we were unable to recover it. 
00:26:33.692 [2024-11-20 09:17:10.560490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.692 [2024-11-20 09:17:10.560557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.692 qpair failed and we were unable to recover it. 00:26:33.692 [2024-11-20 09:17:10.560800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.692 [2024-11-20 09:17:10.560866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.692 qpair failed and we were unable to recover it. 00:26:33.692 [2024-11-20 09:17:10.561113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.692 [2024-11-20 09:17:10.561179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.692 qpair failed and we were unable to recover it. 00:26:33.692 [2024-11-20 09:17:10.561441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.692 [2024-11-20 09:17:10.561507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.692 qpair failed and we were unable to recover it. 00:26:33.692 [2024-11-20 09:17:10.561756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.692 [2024-11-20 09:17:10.561819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.692 qpair failed and we were unable to recover it. 
00:26:33.692 [2024-11-20 09:17:10.562033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.692 [2024-11-20 09:17:10.562097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.692 qpair failed and we were unable to recover it. 00:26:33.692 [2024-11-20 09:17:10.562321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.692 [2024-11-20 09:17:10.562387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.692 qpair failed and we were unable to recover it. 00:26:33.692 [2024-11-20 09:17:10.562584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.692 [2024-11-20 09:17:10.562651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.692 qpair failed and we were unable to recover it. 00:26:33.692 [2024-11-20 09:17:10.562940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.692 [2024-11-20 09:17:10.563005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.692 qpair failed and we were unable to recover it. 00:26:33.692 [2024-11-20 09:17:10.563292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.692 [2024-11-20 09:17:10.563370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.692 qpair failed and we were unable to recover it. 
00:26:33.692 [2024-11-20 09:17:10.563618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.692 [2024-11-20 09:17:10.563693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.692 qpair failed and we were unable to recover it. 
00:26:33.695 [... the same posix.c:1054 / nvme_tcp.c:2288 error pair repeats with advancing timestamps through 2024-11-20 09:17:10.600677: every connect() attempt to 10.0.0.2, port=4420 fails with errno = 111, and each qpair fails without recovery ...] 
00:26:33.695 [2024-11-20 09:17:10.600969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.695 [2024-11-20 09:17:10.601034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.695 qpair failed and we were unable to recover it. 00:26:33.695 [2024-11-20 09:17:10.601286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.695 [2024-11-20 09:17:10.601365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.695 qpair failed and we were unable to recover it. 00:26:33.695 [2024-11-20 09:17:10.601659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.695 [2024-11-20 09:17:10.601723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.695 qpair failed and we were unable to recover it. 00:26:33.695 [2024-11-20 09:17:10.601949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.695 [2024-11-20 09:17:10.602013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.695 qpair failed and we were unable to recover it. 00:26:33.695 [2024-11-20 09:17:10.602320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.695 [2024-11-20 09:17:10.602385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.695 qpair failed and we were unable to recover it. 
00:26:33.695 [2024-11-20 09:17:10.602644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.695 [2024-11-20 09:17:10.602718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.695 qpair failed and we were unable to recover it. 00:26:33.695 [2024-11-20 09:17:10.602973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.695 [2024-11-20 09:17:10.603038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.695 qpair failed and we were unable to recover it. 00:26:33.695 [2024-11-20 09:17:10.603276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.695 [2024-11-20 09:17:10.603369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.695 qpair failed and we were unable to recover it. 00:26:33.695 [2024-11-20 09:17:10.603580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.695 [2024-11-20 09:17:10.603643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.695 qpair failed and we were unable to recover it. 00:26:33.695 [2024-11-20 09:17:10.603903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.695 [2024-11-20 09:17:10.603966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.695 qpair failed and we were unable to recover it. 
00:26:33.695 [2024-11-20 09:17:10.604160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.695 [2024-11-20 09:17:10.604227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.695 qpair failed and we were unable to recover it. 00:26:33.695 [2024-11-20 09:17:10.604475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.695 [2024-11-20 09:17:10.604540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.695 qpair failed and we were unable to recover it. 00:26:33.695 [2024-11-20 09:17:10.604757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.695 [2024-11-20 09:17:10.604821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.695 qpair failed and we were unable to recover it. 00:26:33.695 [2024-11-20 09:17:10.605048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.695 [2024-11-20 09:17:10.605113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.695 qpair failed and we were unable to recover it. 00:26:33.695 [2024-11-20 09:17:10.605364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.695 [2024-11-20 09:17:10.605429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.695 qpair failed and we were unable to recover it. 
00:26:33.695 [2024-11-20 09:17:10.605677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.695 [2024-11-20 09:17:10.605743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.695 qpair failed and we were unable to recover it. 00:26:33.695 [2024-11-20 09:17:10.606004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.606069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 00:26:33.696 [2024-11-20 09:17:10.606341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.606406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 00:26:33.696 [2024-11-20 09:17:10.606706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.606770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 00:26:33.696 [2024-11-20 09:17:10.606996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.607062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 
00:26:33.696 [2024-11-20 09:17:10.607363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.607429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 00:26:33.696 [2024-11-20 09:17:10.607670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.607736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 00:26:33.696 [2024-11-20 09:17:10.607940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.608003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 00:26:33.696 [2024-11-20 09:17:10.608250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.608328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 00:26:33.696 [2024-11-20 09:17:10.608630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.608695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 
00:26:33.696 [2024-11-20 09:17:10.608920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.608983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 00:26:33.696 [2024-11-20 09:17:10.609225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.609290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 00:26:33.696 [2024-11-20 09:17:10.609513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.609581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 00:26:33.696 [2024-11-20 09:17:10.609800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.609864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 00:26:33.696 [2024-11-20 09:17:10.610117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.610181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 
00:26:33.696 [2024-11-20 09:17:10.610402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.610469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 00:26:33.696 [2024-11-20 09:17:10.610726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.610790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 00:26:33.696 [2024-11-20 09:17:10.611010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.611074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 00:26:33.696 [2024-11-20 09:17:10.611327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.611391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 00:26:33.696 [2024-11-20 09:17:10.611631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.611697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 
00:26:33.696 [2024-11-20 09:17:10.611936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.612000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 00:26:33.696 [2024-11-20 09:17:10.612209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.612273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 00:26:33.696 [2024-11-20 09:17:10.612544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.612609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 00:26:33.696 [2024-11-20 09:17:10.612830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.612895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 00:26:33.696 [2024-11-20 09:17:10.613149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.613212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 
00:26:33.696 [2024-11-20 09:17:10.613500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.613565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 00:26:33.696 [2024-11-20 09:17:10.613857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.613920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 00:26:33.696 [2024-11-20 09:17:10.614159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.614223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 00:26:33.696 [2024-11-20 09:17:10.614494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.614559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 00:26:33.696 [2024-11-20 09:17:10.614840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.614904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 
00:26:33.696 [2024-11-20 09:17:10.615119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.615193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 00:26:33.696 [2024-11-20 09:17:10.615434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.615499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 00:26:33.696 [2024-11-20 09:17:10.615784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.615847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 00:26:33.696 [2024-11-20 09:17:10.616124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.616189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 00:26:33.696 [2024-11-20 09:17:10.616439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.616505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 
00:26:33.696 [2024-11-20 09:17:10.616700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.616766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 00:26:33.696 [2024-11-20 09:17:10.617021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.696 [2024-11-20 09:17:10.617086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.696 qpair failed and we were unable to recover it. 00:26:33.697 [2024-11-20 09:17:10.617297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.697 [2024-11-20 09:17:10.617381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.697 qpair failed and we were unable to recover it. 00:26:33.697 [2024-11-20 09:17:10.617662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.697 [2024-11-20 09:17:10.617725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.697 qpair failed and we were unable to recover it. 00:26:33.697 [2024-11-20 09:17:10.618018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.697 [2024-11-20 09:17:10.618083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.697 qpair failed and we were unable to recover it. 
00:26:33.697 [2024-11-20 09:17:10.618298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.697 [2024-11-20 09:17:10.618383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.697 qpair failed and we were unable to recover it. 00:26:33.697 [2024-11-20 09:17:10.618670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.697 [2024-11-20 09:17:10.618734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.697 qpair failed and we were unable to recover it. 00:26:33.697 [2024-11-20 09:17:10.619025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.697 [2024-11-20 09:17:10.619090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.697 qpair failed and we were unable to recover it. 00:26:33.697 [2024-11-20 09:17:10.619294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.697 [2024-11-20 09:17:10.619388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.697 qpair failed and we were unable to recover it. 00:26:33.697 [2024-11-20 09:17:10.619690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.697 [2024-11-20 09:17:10.619753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.697 qpair failed and we were unable to recover it. 
00:26:33.697 [2024-11-20 09:17:10.620057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.697 [2024-11-20 09:17:10.620120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.697 qpair failed and we were unable to recover it. 00:26:33.697 [2024-11-20 09:17:10.620400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.697 [2024-11-20 09:17:10.620465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.697 qpair failed and we were unable to recover it. 00:26:33.697 [2024-11-20 09:17:10.620746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.697 [2024-11-20 09:17:10.620809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.697 qpair failed and we were unable to recover it. 00:26:33.697 [2024-11-20 09:17:10.621057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.697 [2024-11-20 09:17:10.621121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.697 qpair failed and we were unable to recover it. 00:26:33.697 [2024-11-20 09:17:10.621410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.697 [2024-11-20 09:17:10.621478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.697 qpair failed and we were unable to recover it. 
00:26:33.697 [2024-11-20 09:17:10.621746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.697 [2024-11-20 09:17:10.621809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.697 qpair failed and we were unable to recover it. 00:26:33.697 [2024-11-20 09:17:10.622064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.697 [2024-11-20 09:17:10.622127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.697 qpair failed and we were unable to recover it. 00:26:33.697 [2024-11-20 09:17:10.622349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.697 [2024-11-20 09:17:10.622416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.697 qpair failed and we were unable to recover it. 00:26:33.697 [2024-11-20 09:17:10.622633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.697 [2024-11-20 09:17:10.622696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.697 qpair failed and we were unable to recover it. 00:26:33.697 [2024-11-20 09:17:10.622904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.697 [2024-11-20 09:17:10.622971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.697 qpair failed and we were unable to recover it. 
00:26:33.697 [2024-11-20 09:17:10.623256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.697 [2024-11-20 09:17:10.623334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.697 qpair failed and we were unable to recover it. 00:26:33.697 [2024-11-20 09:17:10.623591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.697 [2024-11-20 09:17:10.623658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.697 qpair failed and we were unable to recover it. 00:26:33.697 [2024-11-20 09:17:10.623968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.697 [2024-11-20 09:17:10.624032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.697 qpair failed and we were unable to recover it. 00:26:33.697 [2024-11-20 09:17:10.624335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.697 [2024-11-20 09:17:10.624400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.697 qpair failed and we were unable to recover it. 00:26:33.697 [2024-11-20 09:17:10.624681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.697 [2024-11-20 09:17:10.624744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.697 qpair failed and we were unable to recover it. 
00:26:33.697 [2024-11-20 09:17:10.624992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.697 [2024-11-20 09:17:10.625058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.697 qpair failed and we were unable to recover it. 
[... the same triplet — posix.c:1054:posix_sock_create connect() failed (errno = 111), nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats continuously through timestamp 2024-11-20 09:17:10.661650 ...]
00:26:33.700 [2024-11-20 09:17:10.661880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.700 [2024-11-20 09:17:10.661943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.700 qpair failed and we were unable to recover it. 00:26:33.700 [2024-11-20 09:17:10.662190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.701 [2024-11-20 09:17:10.662254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.701 qpair failed and we were unable to recover it. 00:26:33.701 [2024-11-20 09:17:10.662535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.701 [2024-11-20 09:17:10.662601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.701 qpair failed and we were unable to recover it. 00:26:33.701 [2024-11-20 09:17:10.662854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.701 [2024-11-20 09:17:10.662917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.701 qpair failed and we were unable to recover it. 00:26:33.701 [2024-11-20 09:17:10.663116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.701 [2024-11-20 09:17:10.663180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.701 qpair failed and we were unable to recover it. 
00:26:33.701 [2024-11-20 09:17:10.663441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.701 [2024-11-20 09:17:10.663505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.701 qpair failed and we were unable to recover it. 00:26:33.701 [2024-11-20 09:17:10.663754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.701 [2024-11-20 09:17:10.663818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.701 qpair failed and we were unable to recover it. 00:26:33.701 [2024-11-20 09:17:10.664068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.701 [2024-11-20 09:17:10.664132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.701 qpair failed and we were unable to recover it. 00:26:33.701 [2024-11-20 09:17:10.664381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.701 [2024-11-20 09:17:10.664446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.701 qpair failed and we were unable to recover it. 00:26:33.701 [2024-11-20 09:17:10.664752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.701 [2024-11-20 09:17:10.664815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.701 qpair failed and we were unable to recover it. 
00:26:33.701 [2024-11-20 09:17:10.665069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.701 [2024-11-20 09:17:10.665133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.701 qpair failed and we were unable to recover it. 00:26:33.701 [2024-11-20 09:17:10.665356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.701 [2024-11-20 09:17:10.665420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.701 qpair failed and we were unable to recover it. 00:26:33.701 [2024-11-20 09:17:10.665672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.701 [2024-11-20 09:17:10.665737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.701 qpair failed and we were unable to recover it. 00:26:33.701 [2024-11-20 09:17:10.666024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.701 [2024-11-20 09:17:10.666088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.701 qpair failed and we were unable to recover it. 00:26:33.701 [2024-11-20 09:17:10.666346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.701 [2024-11-20 09:17:10.666410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.701 qpair failed and we were unable to recover it. 
00:26:33.701 [2024-11-20 09:17:10.666662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.701 [2024-11-20 09:17:10.666736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.701 qpair failed and we were unable to recover it. 00:26:33.701 [2024-11-20 09:17:10.666992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.701 [2024-11-20 09:17:10.667056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.701 qpair failed and we were unable to recover it. 00:26:33.701 [2024-11-20 09:17:10.667262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.701 [2024-11-20 09:17:10.667342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.701 qpair failed and we were unable to recover it. 00:26:33.701 [2024-11-20 09:17:10.667586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.701 [2024-11-20 09:17:10.667650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.701 qpair failed and we were unable to recover it. 00:26:33.701 [2024-11-20 09:17:10.667897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.701 [2024-11-20 09:17:10.667962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.701 qpair failed and we were unable to recover it. 
00:26:33.701 [2024-11-20 09:17:10.668224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.701 [2024-11-20 09:17:10.668288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.701 qpair failed and we were unable to recover it. 00:26:33.701 [2024-11-20 09:17:10.668610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.701 [2024-11-20 09:17:10.668673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.701 qpair failed and we were unable to recover it. 00:26:33.701 [2024-11-20 09:17:10.668925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.701 [2024-11-20 09:17:10.668988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.701 qpair failed and we were unable to recover it. 00:26:33.701 [2024-11-20 09:17:10.669269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.701 [2024-11-20 09:17:10.669368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.701 qpair failed and we were unable to recover it. 00:26:33.701 [2024-11-20 09:17:10.669634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.701 [2024-11-20 09:17:10.669699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.701 qpair failed and we were unable to recover it. 
00:26:33.701 [2024-11-20 09:17:10.669983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.701 [2024-11-20 09:17:10.670046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.701 qpair failed and we were unable to recover it. 00:26:33.701 [2024-11-20 09:17:10.670237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.701 [2024-11-20 09:17:10.670301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.701 qpair failed and we were unable to recover it. 00:26:33.701 [2024-11-20 09:17:10.670577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.701 [2024-11-20 09:17:10.670641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.701 qpair failed and we were unable to recover it. 00:26:33.701 [2024-11-20 09:17:10.670885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.701 [2024-11-20 09:17:10.670948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.701 qpair failed and we were unable to recover it. 00:26:33.701 [2024-11-20 09:17:10.671204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.701 [2024-11-20 09:17:10.671269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.701 qpair failed and we were unable to recover it. 
00:26:33.701 [2024-11-20 09:17:10.671506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.701 [2024-11-20 09:17:10.671571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.701 qpair failed and we were unable to recover it. 00:26:33.701 [2024-11-20 09:17:10.671819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.701 [2024-11-20 09:17:10.671884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.701 qpair failed and we were unable to recover it. 00:26:33.701 [2024-11-20 09:17:10.672132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.701 [2024-11-20 09:17:10.672198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.701 qpair failed and we were unable to recover it. 00:26:33.701 [2024-11-20 09:17:10.672439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.701 [2024-11-20 09:17:10.672508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.701 qpair failed and we were unable to recover it. 00:26:33.701 [2024-11-20 09:17:10.672806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.701 [2024-11-20 09:17:10.672870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.701 qpair failed and we were unable to recover it. 
00:26:33.701 [2024-11-20 09:17:10.673106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.702 [2024-11-20 09:17:10.673170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.702 qpair failed and we were unable to recover it. 00:26:33.702 [2024-11-20 09:17:10.673421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.702 [2024-11-20 09:17:10.673486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.702 qpair failed and we were unable to recover it. 00:26:33.702 [2024-11-20 09:17:10.673724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.702 [2024-11-20 09:17:10.673788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.702 qpair failed and we were unable to recover it. 00:26:33.702 [2024-11-20 09:17:10.674044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.702 [2024-11-20 09:17:10.674107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.702 qpair failed and we were unable to recover it. 00:26:33.702 [2024-11-20 09:17:10.674355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.702 [2024-11-20 09:17:10.674419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.702 qpair failed and we were unable to recover it. 
00:26:33.702 [2024-11-20 09:17:10.674649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.702 [2024-11-20 09:17:10.674716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.702 qpair failed and we were unable to recover it. 00:26:33.702 [2024-11-20 09:17:10.674968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.702 [2024-11-20 09:17:10.675032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.702 qpair failed and we were unable to recover it. 00:26:33.702 [2024-11-20 09:17:10.675367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.702 [2024-11-20 09:17:10.675434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.702 qpair failed and we were unable to recover it. 00:26:33.702 [2024-11-20 09:17:10.675721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.702 [2024-11-20 09:17:10.675785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.702 qpair failed and we were unable to recover it. 00:26:33.702 [2024-11-20 09:17:10.675991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.702 [2024-11-20 09:17:10.676056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.702 qpair failed and we were unable to recover it. 
00:26:33.702 [2024-11-20 09:17:10.676331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.702 [2024-11-20 09:17:10.676395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.702 qpair failed and we were unable to recover it. 00:26:33.982 [2024-11-20 09:17:10.676661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.982 [2024-11-20 09:17:10.676726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.982 qpair failed and we were unable to recover it. 00:26:33.982 [2024-11-20 09:17:10.677014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.982 [2024-11-20 09:17:10.677078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.982 qpair failed and we were unable to recover it. 00:26:33.982 [2024-11-20 09:17:10.677348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.982 [2024-11-20 09:17:10.677414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.982 qpair failed and we were unable to recover it. 00:26:33.982 [2024-11-20 09:17:10.677709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.982 [2024-11-20 09:17:10.677773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.982 qpair failed and we were unable to recover it. 
00:26:33.982 [2024-11-20 09:17:10.678057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.982 [2024-11-20 09:17:10.678123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.982 qpair failed and we were unable to recover it. 00:26:33.982 [2024-11-20 09:17:10.678395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.982 [2024-11-20 09:17:10.678460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.982 qpair failed and we were unable to recover it. 00:26:33.982 [2024-11-20 09:17:10.678691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.982 [2024-11-20 09:17:10.678755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.982 qpair failed and we were unable to recover it. 00:26:33.982 [2024-11-20 09:17:10.678947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.982 [2024-11-20 09:17:10.679012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.982 qpair failed and we were unable to recover it. 00:26:33.982 [2024-11-20 09:17:10.679249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.982 [2024-11-20 09:17:10.679328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.982 qpair failed and we were unable to recover it. 
00:26:33.982 [2024-11-20 09:17:10.679540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.982 [2024-11-20 09:17:10.679617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.982 qpair failed and we were unable to recover it. 00:26:33.982 [2024-11-20 09:17:10.679862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.982 [2024-11-20 09:17:10.679926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.982 qpair failed and we were unable to recover it. 00:26:33.982 [2024-11-20 09:17:10.680129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.982 [2024-11-20 09:17:10.680192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.982 qpair failed and we were unable to recover it. 00:26:33.982 [2024-11-20 09:17:10.680463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.982 [2024-11-20 09:17:10.680530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.982 qpair failed and we were unable to recover it. 00:26:33.982 [2024-11-20 09:17:10.680771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.982 [2024-11-20 09:17:10.680834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.982 qpair failed and we were unable to recover it. 
00:26:33.982 [2024-11-20 09:17:10.681030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.982 [2024-11-20 09:17:10.681095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.982 qpair failed and we were unable to recover it. 00:26:33.982 [2024-11-20 09:17:10.681301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.982 [2024-11-20 09:17:10.681388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.982 qpair failed and we were unable to recover it. 00:26:33.982 [2024-11-20 09:17:10.681625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.982 [2024-11-20 09:17:10.681688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.982 qpair failed and we were unable to recover it. 00:26:33.982 [2024-11-20 09:17:10.681922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.983 [2024-11-20 09:17:10.681985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.983 qpair failed and we were unable to recover it. 00:26:33.983 [2024-11-20 09:17:10.682185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.983 [2024-11-20 09:17:10.682249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.983 qpair failed and we were unable to recover it. 
00:26:33.983 [2024-11-20 09:17:10.682497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.983 [2024-11-20 09:17:10.682561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.983 qpair failed and we were unable to recover it. 00:26:33.983 [2024-11-20 09:17:10.682809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.983 [2024-11-20 09:17:10.682875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.983 qpair failed and we were unable to recover it. 00:26:33.983 [2024-11-20 09:17:10.683163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.983 [2024-11-20 09:17:10.683226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.983 qpair failed and we were unable to recover it. 00:26:33.983 [2024-11-20 09:17:10.683499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.983 [2024-11-20 09:17:10.683568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.983 qpair failed and we were unable to recover it. 00:26:33.983 [2024-11-20 09:17:10.683838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.983 [2024-11-20 09:17:10.683905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.983 qpair failed and we were unable to recover it. 
00:26:33.983 [2024-11-20 09:17:10.684131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.983 [2024-11-20 09:17:10.684195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.983 qpair failed and we were unable to recover it. 00:26:33.983 [2024-11-20 09:17:10.684447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.983 [2024-11-20 09:17:10.684515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.983 qpair failed and we were unable to recover it. 00:26:33.983 [2024-11-20 09:17:10.684759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.983 [2024-11-20 09:17:10.684827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.983 qpair failed and we were unable to recover it. 00:26:33.983 [2024-11-20 09:17:10.685086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.983 [2024-11-20 09:17:10.685149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.983 qpair failed and we were unable to recover it. 00:26:33.983 [2024-11-20 09:17:10.685344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.983 [2024-11-20 09:17:10.685409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.983 qpair failed and we were unable to recover it. 
00:26:33.983 [2024-11-20 09:17:10.685668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.983 [2024-11-20 09:17:10.685732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.983 qpair failed and we were unable to recover it. 00:26:33.983 [2024-11-20 09:17:10.685931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.983 [2024-11-20 09:17:10.685994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.983 qpair failed and we were unable to recover it. 00:26:33.983 [2024-11-20 09:17:10.686211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.983 [2024-11-20 09:17:10.686274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.983 qpair failed and we were unable to recover it. 00:26:33.983 [2024-11-20 09:17:10.686534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.983 [2024-11-20 09:17:10.686597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.983 qpair failed and we were unable to recover it. 00:26:33.983 [2024-11-20 09:17:10.686840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.983 [2024-11-20 09:17:10.686903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.983 qpair failed and we were unable to recover it. 
00:26:33.986 [2024-11-20 09:17:10.722914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.986 [2024-11-20 09:17:10.722978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.986 qpair failed and we were unable to recover it. 00:26:33.986 [2024-11-20 09:17:10.723233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.986 [2024-11-20 09:17:10.723298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.986 qpair failed and we were unable to recover it. 00:26:33.986 [2024-11-20 09:17:10.723611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.986 [2024-11-20 09:17:10.723674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.986 qpair failed and we were unable to recover it. 00:26:33.986 [2024-11-20 09:17:10.723961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.986 [2024-11-20 09:17:10.724024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.986 qpair failed and we were unable to recover it. 00:26:33.986 [2024-11-20 09:17:10.724322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.986 [2024-11-20 09:17:10.724388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.986 qpair failed and we were unable to recover it. 
00:26:33.986 [2024-11-20 09:17:10.724675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.986 [2024-11-20 09:17:10.724740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.986 qpair failed and we were unable to recover it. 00:26:33.986 [2024-11-20 09:17:10.724981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.986 [2024-11-20 09:17:10.725045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.986 qpair failed and we were unable to recover it. 00:26:33.986 [2024-11-20 09:17:10.725332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.986 [2024-11-20 09:17:10.725398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.986 qpair failed and we were unable to recover it. 00:26:33.986 [2024-11-20 09:17:10.725691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.986 [2024-11-20 09:17:10.725756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.986 qpair failed and we were unable to recover it. 00:26:33.986 [2024-11-20 09:17:10.726013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.986 [2024-11-20 09:17:10.726077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.986 qpair failed and we were unable to recover it. 
00:26:33.986 [2024-11-20 09:17:10.726334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.986 [2024-11-20 09:17:10.726399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.986 qpair failed and we were unable to recover it. 00:26:33.986 [2024-11-20 09:17:10.726625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.986 [2024-11-20 09:17:10.726689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.986 qpair failed and we were unable to recover it. 00:26:33.986 [2024-11-20 09:17:10.726939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.986 [2024-11-20 09:17:10.727017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.986 qpair failed and we were unable to recover it. 00:26:33.986 [2024-11-20 09:17:10.727279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.986 [2024-11-20 09:17:10.727374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.986 qpair failed and we were unable to recover it. 00:26:33.986 [2024-11-20 09:17:10.727581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.986 [2024-11-20 09:17:10.727646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.986 qpair failed and we were unable to recover it. 
00:26:33.986 [2024-11-20 09:17:10.727893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.986 [2024-11-20 09:17:10.727956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.986 qpair failed and we were unable to recover it. 00:26:33.986 [2024-11-20 09:17:10.728158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.986 [2024-11-20 09:17:10.728224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.986 qpair failed and we were unable to recover it. 00:26:33.986 [2024-11-20 09:17:10.728466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.986 [2024-11-20 09:17:10.728531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.986 qpair failed and we were unable to recover it. 00:26:33.986 [2024-11-20 09:17:10.728836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.986 [2024-11-20 09:17:10.728899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.986 qpair failed and we were unable to recover it. 00:26:33.986 [2024-11-20 09:17:10.729144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.986 [2024-11-20 09:17:10.729207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.986 qpair failed and we were unable to recover it. 
00:26:33.986 [2024-11-20 09:17:10.729482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.986 [2024-11-20 09:17:10.729547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.986 qpair failed and we were unable to recover it. 00:26:33.986 [2024-11-20 09:17:10.729844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.986 [2024-11-20 09:17:10.729907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.986 qpair failed and we were unable to recover it. 00:26:33.986 [2024-11-20 09:17:10.730116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.986 [2024-11-20 09:17:10.730180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.986 qpair failed and we were unable to recover it. 00:26:33.986 [2024-11-20 09:17:10.730442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.987 [2024-11-20 09:17:10.730508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.987 qpair failed and we were unable to recover it. 00:26:33.987 [2024-11-20 09:17:10.730746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.987 [2024-11-20 09:17:10.730809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.987 qpair failed and we were unable to recover it. 
00:26:33.987 [2024-11-20 09:17:10.731091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.987 [2024-11-20 09:17:10.731154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.987 qpair failed and we were unable to recover it. 00:26:33.987 [2024-11-20 09:17:10.731453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.987 [2024-11-20 09:17:10.731519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.987 qpair failed and we were unable to recover it. 00:26:33.987 [2024-11-20 09:17:10.731784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.987 [2024-11-20 09:17:10.731847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.987 qpair failed and we were unable to recover it. 00:26:33.987 [2024-11-20 09:17:10.732052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.987 [2024-11-20 09:17:10.732115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.987 qpair failed and we were unable to recover it. 00:26:33.987 [2024-11-20 09:17:10.732376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.987 [2024-11-20 09:17:10.732442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.987 qpair failed and we were unable to recover it. 
00:26:33.987 [2024-11-20 09:17:10.732736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.987 [2024-11-20 09:17:10.732799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.987 qpair failed and we were unable to recover it. 00:26:33.987 [2024-11-20 09:17:10.733064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.987 [2024-11-20 09:17:10.733128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.987 qpair failed and we were unable to recover it. 00:26:33.987 [2024-11-20 09:17:10.733382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.987 [2024-11-20 09:17:10.733447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.987 qpair failed and we were unable to recover it. 00:26:33.987 [2024-11-20 09:17:10.733736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.987 [2024-11-20 09:17:10.733799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.987 qpair failed and we were unable to recover it. 00:26:33.987 [2024-11-20 09:17:10.734054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.987 [2024-11-20 09:17:10.734117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.987 qpair failed and we were unable to recover it. 
00:26:33.987 [2024-11-20 09:17:10.734365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.987 [2024-11-20 09:17:10.734430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.987 qpair failed and we were unable to recover it. 00:26:33.987 [2024-11-20 09:17:10.734718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.987 [2024-11-20 09:17:10.734782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.987 qpair failed and we were unable to recover it. 00:26:33.987 [2024-11-20 09:17:10.735030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.987 [2024-11-20 09:17:10.735093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.987 qpair failed and we were unable to recover it. 00:26:33.987 [2024-11-20 09:17:10.735378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.987 [2024-11-20 09:17:10.735443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.987 qpair failed and we were unable to recover it. 00:26:33.987 [2024-11-20 09:17:10.735697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.987 [2024-11-20 09:17:10.735761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.987 qpair failed and we were unable to recover it. 
00:26:33.987 [2024-11-20 09:17:10.735968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.987 [2024-11-20 09:17:10.736035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.987 qpair failed and we were unable to recover it. 00:26:33.987 [2024-11-20 09:17:10.736275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.987 [2024-11-20 09:17:10.736362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.987 qpair failed and we were unable to recover it. 00:26:33.987 [2024-11-20 09:17:10.736586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.987 [2024-11-20 09:17:10.736650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.987 qpair failed and we were unable to recover it. 00:26:33.987 [2024-11-20 09:17:10.736855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.987 [2024-11-20 09:17:10.736920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.987 qpair failed and we were unable to recover it. 00:26:33.987 [2024-11-20 09:17:10.737187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.987 [2024-11-20 09:17:10.737250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.987 qpair failed and we were unable to recover it. 
00:26:33.987 [2024-11-20 09:17:10.737559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.987 [2024-11-20 09:17:10.737624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.987 qpair failed and we were unable to recover it. 00:26:33.987 [2024-11-20 09:17:10.737874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.987 [2024-11-20 09:17:10.737938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.987 qpair failed and we were unable to recover it. 00:26:33.987 [2024-11-20 09:17:10.738223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.987 [2024-11-20 09:17:10.738286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.987 qpair failed and we were unable to recover it. 00:26:33.987 [2024-11-20 09:17:10.738559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.987 [2024-11-20 09:17:10.738623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.988 qpair failed and we were unable to recover it. 00:26:33.988 [2024-11-20 09:17:10.738919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.988 [2024-11-20 09:17:10.738983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.988 qpair failed and we were unable to recover it. 
00:26:33.988 [2024-11-20 09:17:10.739237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.988 [2024-11-20 09:17:10.739321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.988 qpair failed and we were unable to recover it. 00:26:33.988 [2024-11-20 09:17:10.739615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.988 [2024-11-20 09:17:10.739679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.988 qpair failed and we were unable to recover it. 00:26:33.988 [2024-11-20 09:17:10.739938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.988 [2024-11-20 09:17:10.740012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.988 qpair failed and we were unable to recover it. 00:26:33.988 [2024-11-20 09:17:10.740298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.988 [2024-11-20 09:17:10.740377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.988 qpair failed and we were unable to recover it. 00:26:33.988 [2024-11-20 09:17:10.740634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.988 [2024-11-20 09:17:10.740698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.988 qpair failed and we were unable to recover it. 
00:26:33.988 [2024-11-20 09:17:10.740943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.988 [2024-11-20 09:17:10.741006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.988 qpair failed and we were unable to recover it. 00:26:33.988 [2024-11-20 09:17:10.741245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.988 [2024-11-20 09:17:10.741328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.988 qpair failed and we were unable to recover it. 00:26:33.988 [2024-11-20 09:17:10.741554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.988 [2024-11-20 09:17:10.741619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.988 qpair failed and we were unable to recover it. 00:26:33.988 [2024-11-20 09:17:10.741862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.988 [2024-11-20 09:17:10.741927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.988 qpair failed and we were unable to recover it. 00:26:33.988 [2024-11-20 09:17:10.742226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.988 [2024-11-20 09:17:10.742290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.988 qpair failed and we were unable to recover it. 
00:26:33.988 [2024-11-20 09:17:10.742525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.988 [2024-11-20 09:17:10.742593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.988 qpair failed and we were unable to recover it. 00:26:33.988 [2024-11-20 09:17:10.742834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.988 [2024-11-20 09:17:10.742897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.988 qpair failed and we were unable to recover it. 00:26:33.988 [2024-11-20 09:17:10.743185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.988 [2024-11-20 09:17:10.743250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:33.988 qpair failed and we were unable to recover it. 00:26:33.988 [2024-11-20 09:17:10.743548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.988 [2024-11-20 09:17:10.743649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.988 qpair failed and we were unable to recover it. 00:26:33.988 [2024-11-20 09:17:10.743950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.988 [2024-11-20 09:17:10.744016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.988 qpair failed and we were unable to recover it. 
00:26:33.988 [2024-11-20 09:17:10.744254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.988 [2024-11-20 09:17:10.744332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.988 qpair failed and we were unable to recover it. 00:26:33.988 [2024-11-20 09:17:10.744626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.988 [2024-11-20 09:17:10.744690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.988 qpair failed and we were unable to recover it. 00:26:33.988 [2024-11-20 09:17:10.744977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.988 [2024-11-20 09:17:10.745040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.988 qpair failed and we were unable to recover it. 00:26:33.988 [2024-11-20 09:17:10.745247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.988 [2024-11-20 09:17:10.745325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.988 qpair failed and we were unable to recover it. 00:26:33.988 [2024-11-20 09:17:10.745617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.988 [2024-11-20 09:17:10.745681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.988 qpair failed and we were unable to recover it. 
00:26:33.988 [2024-11-20 09:17:10.745895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.988 [2024-11-20 09:17:10.745963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.988 qpair failed and we were unable to recover it. 00:26:33.988 [2024-11-20 09:17:10.746233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.988 [2024-11-20 09:17:10.746297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.988 qpair failed and we were unable to recover it. 00:26:33.988 [2024-11-20 09:17:10.746569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.988 [2024-11-20 09:17:10.746633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.988 qpair failed and we were unable to recover it. 00:26:33.988 [2024-11-20 09:17:10.746879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.988 [2024-11-20 09:17:10.746945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.988 qpair failed and we were unable to recover it. 00:26:33.988 [2024-11-20 09:17:10.747238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.988 [2024-11-20 09:17:10.747301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.988 qpair failed and we were unable to recover it. 
00:26:33.988 [2024-11-20 09:17:10.747578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.988 [2024-11-20 09:17:10.747641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:33.988 qpair failed and we were unable to recover it.
00:26:33.992 [... identical connect()/qpair failure records repeated for each retry, last at 2024-11-20 09:17:10.784833 ...]
00:26:33.992 [2024-11-20 09:17:10.785120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.992 [2024-11-20 09:17:10.785184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.992 qpair failed and we were unable to recover it. 00:26:33.992 [2024-11-20 09:17:10.785480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.992 [2024-11-20 09:17:10.785545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.992 qpair failed and we were unable to recover it. 00:26:33.992 [2024-11-20 09:17:10.785785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.992 [2024-11-20 09:17:10.785848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.992 qpair failed and we were unable to recover it. 00:26:33.992 [2024-11-20 09:17:10.786089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.992 [2024-11-20 09:17:10.786153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.992 qpair failed and we were unable to recover it. 00:26:33.992 [2024-11-20 09:17:10.786426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.992 [2024-11-20 09:17:10.786491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.992 qpair failed and we were unable to recover it. 
00:26:33.992 [2024-11-20 09:17:10.786711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.992 [2024-11-20 09:17:10.786775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.992 qpair failed and we were unable to recover it. 00:26:33.992 [2024-11-20 09:17:10.787027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.992 [2024-11-20 09:17:10.787091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.992 qpair failed and we were unable to recover it. 00:26:33.992 [2024-11-20 09:17:10.787343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.992 [2024-11-20 09:17:10.787408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.992 qpair failed and we were unable to recover it. 00:26:33.992 [2024-11-20 09:17:10.787604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.992 [2024-11-20 09:17:10.787668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.992 qpair failed and we were unable to recover it. 00:26:33.992 [2024-11-20 09:17:10.787963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.992 [2024-11-20 09:17:10.788029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.992 qpair failed and we were unable to recover it. 
00:26:33.992 [2024-11-20 09:17:10.788240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.992 [2024-11-20 09:17:10.788333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.992 qpair failed and we were unable to recover it. 00:26:33.992 [2024-11-20 09:17:10.788591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.992 [2024-11-20 09:17:10.788666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.992 qpair failed and we were unable to recover it. 00:26:33.992 [2024-11-20 09:17:10.788927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.992 [2024-11-20 09:17:10.788992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.992 qpair failed and we were unable to recover it. 00:26:33.992 [2024-11-20 09:17:10.789284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.992 [2024-11-20 09:17:10.789363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.992 qpair failed and we were unable to recover it. 00:26:33.992 [2024-11-20 09:17:10.789618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.992 [2024-11-20 09:17:10.789683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.992 qpair failed and we were unable to recover it. 
00:26:33.992 [2024-11-20 09:17:10.789967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.992 [2024-11-20 09:17:10.790032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.992 qpair failed and we were unable to recover it. 00:26:33.992 [2024-11-20 09:17:10.790285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.992 [2024-11-20 09:17:10.790368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.992 qpair failed and we were unable to recover it. 00:26:33.992 [2024-11-20 09:17:10.790666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.992 [2024-11-20 09:17:10.790730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.992 qpair failed and we were unable to recover it. 00:26:33.992 [2024-11-20 09:17:10.791019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.992 [2024-11-20 09:17:10.791082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.992 qpair failed and we were unable to recover it. 00:26:33.992 [2024-11-20 09:17:10.791338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.992 [2024-11-20 09:17:10.791403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.992 qpair failed and we were unable to recover it. 
00:26:33.992 [2024-11-20 09:17:10.791671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.992 [2024-11-20 09:17:10.791736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.992 qpair failed and we were unable to recover it. 00:26:33.992 [2024-11-20 09:17:10.791991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.992 [2024-11-20 09:17:10.792057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.992 qpair failed and we were unable to recover it. 00:26:33.992 [2024-11-20 09:17:10.792354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.992 [2024-11-20 09:17:10.792420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.992 qpair failed and we were unable to recover it. 00:26:33.992 [2024-11-20 09:17:10.792665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.992 [2024-11-20 09:17:10.792729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.992 qpair failed and we were unable to recover it. 00:26:33.992 [2024-11-20 09:17:10.792994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.992 [2024-11-20 09:17:10.793057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.992 qpair failed and we were unable to recover it. 
00:26:33.993 [2024-11-20 09:17:10.793352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.993 [2024-11-20 09:17:10.793420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.993 qpair failed and we were unable to recover it. 00:26:33.993 [2024-11-20 09:17:10.793714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.993 [2024-11-20 09:17:10.793779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.993 qpair failed and we were unable to recover it. 00:26:33.993 [2024-11-20 09:17:10.794050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.993 [2024-11-20 09:17:10.794113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.993 qpair failed and we were unable to recover it. 00:26:33.993 [2024-11-20 09:17:10.794401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.993 [2024-11-20 09:17:10.794466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.993 qpair failed and we were unable to recover it. 00:26:33.993 [2024-11-20 09:17:10.794711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.993 [2024-11-20 09:17:10.794775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.993 qpair failed and we were unable to recover it. 
00:26:33.993 [2024-11-20 09:17:10.795058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.993 [2024-11-20 09:17:10.795121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.993 qpair failed and we were unable to recover it. 00:26:33.993 [2024-11-20 09:17:10.795377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.993 [2024-11-20 09:17:10.795443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.993 qpair failed and we were unable to recover it. 00:26:33.993 [2024-11-20 09:17:10.795704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.993 [2024-11-20 09:17:10.795770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.993 qpair failed and we were unable to recover it. 00:26:33.993 [2024-11-20 09:17:10.795978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.993 [2024-11-20 09:17:10.796042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.993 qpair failed and we were unable to recover it. 00:26:33.993 [2024-11-20 09:17:10.796275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.993 [2024-11-20 09:17:10.796371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.993 qpair failed and we were unable to recover it. 
00:26:33.993 [2024-11-20 09:17:10.796563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.993 [2024-11-20 09:17:10.796627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.993 qpair failed and we were unable to recover it. 00:26:33.993 [2024-11-20 09:17:10.796874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.993 [2024-11-20 09:17:10.796938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.993 qpair failed and we were unable to recover it. 00:26:33.993 [2024-11-20 09:17:10.797116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.993 [2024-11-20 09:17:10.797180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.993 qpair failed and we were unable to recover it. 00:26:33.993 [2024-11-20 09:17:10.797438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.993 [2024-11-20 09:17:10.797513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.993 qpair failed and we were unable to recover it. 00:26:33.993 [2024-11-20 09:17:10.797749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.993 [2024-11-20 09:17:10.797813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.993 qpair failed and we were unable to recover it. 
00:26:33.993 [2024-11-20 09:17:10.798065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.993 [2024-11-20 09:17:10.798130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.993 qpair failed and we were unable to recover it. 00:26:33.993 [2024-11-20 09:17:10.798374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.993 [2024-11-20 09:17:10.798439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.993 qpair failed and we were unable to recover it. 00:26:33.993 [2024-11-20 09:17:10.798695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.993 [2024-11-20 09:17:10.798759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.993 qpair failed and we were unable to recover it. 00:26:33.993 [2024-11-20 09:17:10.799058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.993 [2024-11-20 09:17:10.799123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.993 qpair failed and we were unable to recover it. 00:26:33.993 [2024-11-20 09:17:10.799362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.993 [2024-11-20 09:17:10.799427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.993 qpair failed and we were unable to recover it. 
00:26:33.993 [2024-11-20 09:17:10.799636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.993 [2024-11-20 09:17:10.799700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.993 qpair failed and we were unable to recover it. 00:26:33.993 [2024-11-20 09:17:10.799950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.993 [2024-11-20 09:17:10.800015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.993 qpair failed and we were unable to recover it. 00:26:33.993 [2024-11-20 09:17:10.800276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.993 [2024-11-20 09:17:10.800355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.993 qpair failed and we were unable to recover it. 00:26:33.993 [2024-11-20 09:17:10.800654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.993 [2024-11-20 09:17:10.800718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.993 qpair failed and we were unable to recover it. 00:26:33.993 [2024-11-20 09:17:10.800953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.993 [2024-11-20 09:17:10.801018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.993 qpair failed and we were unable to recover it. 
00:26:33.993 [2024-11-20 09:17:10.801273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.993 [2024-11-20 09:17:10.801350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.993 qpair failed and we were unable to recover it. 00:26:33.993 [2024-11-20 09:17:10.801644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.993 [2024-11-20 09:17:10.801708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.993 qpair failed and we were unable to recover it. 00:26:33.993 [2024-11-20 09:17:10.801916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.994 [2024-11-20 09:17:10.801981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.994 qpair failed and we were unable to recover it. 00:26:33.994 [2024-11-20 09:17:10.802233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.994 [2024-11-20 09:17:10.802296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.994 qpair failed and we were unable to recover it. 00:26:33.994 [2024-11-20 09:17:10.802600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.994 [2024-11-20 09:17:10.802664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.994 qpair failed and we were unable to recover it. 
00:26:33.994 [2024-11-20 09:17:10.802916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.994 [2024-11-20 09:17:10.802984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.994 qpair failed and we were unable to recover it. 00:26:33.994 [2024-11-20 09:17:10.803244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.994 [2024-11-20 09:17:10.803326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.994 qpair failed and we were unable to recover it. 00:26:33.994 [2024-11-20 09:17:10.803586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.994 [2024-11-20 09:17:10.803651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.994 qpair failed and we were unable to recover it. 00:26:33.994 [2024-11-20 09:17:10.803888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.994 [2024-11-20 09:17:10.803952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.994 qpair failed and we were unable to recover it. 00:26:33.994 [2024-11-20 09:17:10.804163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.994 [2024-11-20 09:17:10.804228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.994 qpair failed and we were unable to recover it. 
00:26:33.994 [2024-11-20 09:17:10.804467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.994 [2024-11-20 09:17:10.804533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.994 qpair failed and we were unable to recover it. 00:26:33.994 [2024-11-20 09:17:10.804785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.994 [2024-11-20 09:17:10.804848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.994 qpair failed and we were unable to recover it. 00:26:33.994 [2024-11-20 09:17:10.805110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.994 [2024-11-20 09:17:10.805174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.994 qpair failed and we were unable to recover it. 00:26:33.994 [2024-11-20 09:17:10.805433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.994 [2024-11-20 09:17:10.805499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.994 qpair failed and we were unable to recover it. 00:26:33.994 [2024-11-20 09:17:10.805748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.994 [2024-11-20 09:17:10.805812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.994 qpair failed and we were unable to recover it. 
00:26:33.994 [2024-11-20 09:17:10.806059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.994 [2024-11-20 09:17:10.806123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.994 qpair failed and we were unable to recover it. 00:26:33.994 [2024-11-20 09:17:10.806358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.994 [2024-11-20 09:17:10.806424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.994 qpair failed and we were unable to recover it. 00:26:33.994 [2024-11-20 09:17:10.806686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.994 [2024-11-20 09:17:10.806751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.994 qpair failed and we were unable to recover it. 00:26:33.994 [2024-11-20 09:17:10.807052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.994 [2024-11-20 09:17:10.807117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.994 qpair failed and we were unable to recover it. 00:26:33.994 [2024-11-20 09:17:10.807367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.994 [2024-11-20 09:17:10.807433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.994 qpair failed and we were unable to recover it. 
00:26:33.994 [2024-11-20 09:17:10.807686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.994 [2024-11-20 09:17:10.807750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.994 qpair failed and we were unable to recover it. 00:26:33.994 [2024-11-20 09:17:10.808005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.994 [2024-11-20 09:17:10.808068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.994 qpair failed and we were unable to recover it. 00:26:33.994 [2024-11-20 09:17:10.808354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.994 [2024-11-20 09:17:10.808420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.994 qpair failed and we were unable to recover it. 00:26:33.994 [2024-11-20 09:17:10.808653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.994 [2024-11-20 09:17:10.808716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.994 qpair failed and we were unable to recover it. 00:26:33.994 [2024-11-20 09:17:10.808977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.994 [2024-11-20 09:17:10.809040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.994 qpair failed and we were unable to recover it. 
00:26:33.994 [2024-11-20 09:17:10.809237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.994 [2024-11-20 09:17:10.809321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.994 qpair failed and we were unable to recover it. 00:26:33.994 [2024-11-20 09:17:10.809582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.994 [2024-11-20 09:17:10.809649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.994 qpair failed and we were unable to recover it. 00:26:33.994 [2024-11-20 09:17:10.809895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.994 [2024-11-20 09:17:10.809961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.994 qpair failed and we were unable to recover it. 00:26:33.994 [2024-11-20 09:17:10.810206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.994 [2024-11-20 09:17:10.810271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.994 qpair failed and we were unable to recover it. 00:26:33.994 [2024-11-20 09:17:10.810559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.994 [2024-11-20 09:17:10.810624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.994 qpair failed and we were unable to recover it. 
00:26:33.998 [2024-11-20 09:17:10.846982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.998 [2024-11-20 09:17:10.847045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.998 qpair failed and we were unable to recover it. 00:26:33.998 [2024-11-20 09:17:10.847336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.998 [2024-11-20 09:17:10.847403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.998 qpair failed and we were unable to recover it. 00:26:33.998 [2024-11-20 09:17:10.847622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.998 [2024-11-20 09:17:10.847689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.998 qpair failed and we were unable to recover it. 00:26:33.998 [2024-11-20 09:17:10.847973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.998 [2024-11-20 09:17:10.848037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.998 qpair failed and we were unable to recover it. 00:26:33.998 [2024-11-20 09:17:10.848341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.998 [2024-11-20 09:17:10.848406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.998 qpair failed and we were unable to recover it. 
00:26:33.998 [2024-11-20 09:17:10.848696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.998 [2024-11-20 09:17:10.848761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.998 qpair failed and we were unable to recover it. 00:26:33.998 [2024-11-20 09:17:10.849054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.998 [2024-11-20 09:17:10.849118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.998 qpair failed and we were unable to recover it. 00:26:33.998 [2024-11-20 09:17:10.849408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.998 [2024-11-20 09:17:10.849473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.998 qpair failed and we were unable to recover it. 00:26:33.998 [2024-11-20 09:17:10.849721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.998 [2024-11-20 09:17:10.849785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.998 qpair failed and we were unable to recover it. 00:26:33.998 [2024-11-20 09:17:10.850038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.998 [2024-11-20 09:17:10.850102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.998 qpair failed and we were unable to recover it. 
00:26:33.998 [2024-11-20 09:17:10.850330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.998 [2024-11-20 09:17:10.850395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.998 qpair failed and we were unable to recover it. 00:26:33.998 [2024-11-20 09:17:10.850663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.998 [2024-11-20 09:17:10.850729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.998 qpair failed and we were unable to recover it. 00:26:33.998 [2024-11-20 09:17:10.851015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.998 [2024-11-20 09:17:10.851081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.998 qpair failed and we were unable to recover it. 00:26:33.998 [2024-11-20 09:17:10.851370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.998 [2024-11-20 09:17:10.851435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.998 qpair failed and we were unable to recover it. 00:26:33.998 [2024-11-20 09:17:10.851682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.998 [2024-11-20 09:17:10.851749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.998 qpair failed and we were unable to recover it. 
00:26:33.998 [2024-11-20 09:17:10.851992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.998 [2024-11-20 09:17:10.852057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.998 qpair failed and we were unable to recover it. 00:26:33.998 [2024-11-20 09:17:10.852344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.998 [2024-11-20 09:17:10.852409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.998 qpair failed and we were unable to recover it. 00:26:33.998 [2024-11-20 09:17:10.852620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.998 [2024-11-20 09:17:10.852687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.998 qpair failed and we were unable to recover it. 00:26:33.998 [2024-11-20 09:17:10.852931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.998 [2024-11-20 09:17:10.852996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.998 qpair failed and we were unable to recover it. 00:26:33.998 [2024-11-20 09:17:10.853250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.998 [2024-11-20 09:17:10.853326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.998 qpair failed and we were unable to recover it. 
00:26:33.998 [2024-11-20 09:17:10.853578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.998 [2024-11-20 09:17:10.853642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.998 qpair failed and we were unable to recover it. 00:26:33.998 [2024-11-20 09:17:10.853883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.998 [2024-11-20 09:17:10.853947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.998 qpair failed and we were unable to recover it. 00:26:33.998 [2024-11-20 09:17:10.854228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.998 [2024-11-20 09:17:10.854292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.998 qpair failed and we were unable to recover it. 00:26:33.998 [2024-11-20 09:17:10.854593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.998 [2024-11-20 09:17:10.854657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.998 qpair failed and we were unable to recover it. 00:26:33.998 [2024-11-20 09:17:10.854908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.998 [2024-11-20 09:17:10.854972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.998 qpair failed and we were unable to recover it. 
00:26:33.998 [2024-11-20 09:17:10.855238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.998 [2024-11-20 09:17:10.855331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.998 qpair failed and we were unable to recover it. 00:26:33.998 [2024-11-20 09:17:10.855634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.998 [2024-11-20 09:17:10.855698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.998 qpair failed and we were unable to recover it. 00:26:33.998 [2024-11-20 09:17:10.855950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.998 [2024-11-20 09:17:10.856014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.998 qpair failed and we were unable to recover it. 00:26:33.998 [2024-11-20 09:17:10.856270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.998 [2024-11-20 09:17:10.856354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.998 qpair failed and we were unable to recover it. 00:26:33.998 [2024-11-20 09:17:10.856641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.998 [2024-11-20 09:17:10.856705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.998 qpair failed and we were unable to recover it. 
00:26:33.998 [2024-11-20 09:17:10.856948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.998 [2024-11-20 09:17:10.857012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.998 qpair failed and we were unable to recover it. 00:26:33.998 [2024-11-20 09:17:10.857250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.998 [2024-11-20 09:17:10.857332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.998 qpair failed and we were unable to recover it. 00:26:33.998 [2024-11-20 09:17:10.857579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.998 [2024-11-20 09:17:10.857646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.998 qpair failed and we were unable to recover it. 00:26:33.998 [2024-11-20 09:17:10.857942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.998 [2024-11-20 09:17:10.858007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 00:26:33.999 [2024-11-20 09:17:10.858298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.858381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 
00:26:33.999 [2024-11-20 09:17:10.858624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.858687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 00:26:33.999 [2024-11-20 09:17:10.858978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.859042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 00:26:33.999 [2024-11-20 09:17:10.859293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.859394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 00:26:33.999 [2024-11-20 09:17:10.859646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.859720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 00:26:33.999 [2024-11-20 09:17:10.860005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.860069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 
00:26:33.999 [2024-11-20 09:17:10.860273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.860358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 00:26:33.999 [2024-11-20 09:17:10.860602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.860666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 00:26:33.999 [2024-11-20 09:17:10.860885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.860949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 00:26:33.999 [2024-11-20 09:17:10.861215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.861280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 00:26:33.999 [2024-11-20 09:17:10.861598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.861664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 
00:26:33.999 [2024-11-20 09:17:10.861945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.862009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 00:26:33.999 [2024-11-20 09:17:10.862228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.862291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 00:26:33.999 [2024-11-20 09:17:10.862567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.862631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 00:26:33.999 [2024-11-20 09:17:10.862883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.862947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 00:26:33.999 [2024-11-20 09:17:10.863173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.863237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 
00:26:33.999 [2024-11-20 09:17:10.863517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.863583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 00:26:33.999 [2024-11-20 09:17:10.863794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.863857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 00:26:33.999 [2024-11-20 09:17:10.864107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.864171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 00:26:33.999 [2024-11-20 09:17:10.864413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.864479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 00:26:33.999 [2024-11-20 09:17:10.864690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.864753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 
00:26:33.999 [2024-11-20 09:17:10.865057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.865121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 00:26:33.999 [2024-11-20 09:17:10.865368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.865434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 00:26:33.999 [2024-11-20 09:17:10.865677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.865742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 00:26:33.999 [2024-11-20 09:17:10.865998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.866064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 00:26:33.999 [2024-11-20 09:17:10.866331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.866397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 
00:26:33.999 [2024-11-20 09:17:10.866644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.866710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 00:26:33.999 [2024-11-20 09:17:10.866972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.867036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 00:26:33.999 [2024-11-20 09:17:10.867283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.867362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 00:26:33.999 [2024-11-20 09:17:10.867569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.867633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 00:26:33.999 [2024-11-20 09:17:10.867869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.867932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 
00:26:33.999 [2024-11-20 09:17:10.868178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.868253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 00:26:33.999 [2024-11-20 09:17:10.868539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.868639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 00:26:33.999 [2024-11-20 09:17:10.868938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.869006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 00:26:33.999 [2024-11-20 09:17:10.869263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.869361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 00:26:33.999 [2024-11-20 09:17:10.869657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.869724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:33.999 qpair failed and we were unable to recover it. 
00:26:33.999 [2024-11-20 09:17:10.869986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.999 [2024-11-20 09:17:10.870052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.000 qpair failed and we were unable to recover it. 00:26:34.000 [2024-11-20 09:17:10.870331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.000 [2024-11-20 09:17:10.870402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.000 qpair failed and we were unable to recover it. 00:26:34.000 [2024-11-20 09:17:10.870675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.000 [2024-11-20 09:17:10.870742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.000 qpair failed and we were unable to recover it. 00:26:34.000 [2024-11-20 09:17:10.871030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.000 [2024-11-20 09:17:10.871096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.000 qpair failed and we were unable to recover it. 00:26:34.000 [2024-11-20 09:17:10.871354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.000 [2024-11-20 09:17:10.871422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.000 qpair failed and we were unable to recover it. 
00:26:34.000 [2024-11-20 09:17:10.871683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.000 [2024-11-20 09:17:10.871750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.000 qpair failed and we were unable to recover it. 00:26:34.000 [2024-11-20 09:17:10.871986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.000 [2024-11-20 09:17:10.872051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.000 qpair failed and we were unable to recover it. 00:26:34.000 [2024-11-20 09:17:10.872335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.000 [2024-11-20 09:17:10.872401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.000 qpair failed and we were unable to recover it. 00:26:34.000 [2024-11-20 09:17:10.872657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.000 [2024-11-20 09:17:10.872723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.000 qpair failed and we were unable to recover it. 00:26:34.000 [2024-11-20 09:17:10.873033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.000 [2024-11-20 09:17:10.873099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.000 qpair failed and we were unable to recover it. 
00:26:34.000 [2024-11-20 09:17:10.873357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.000 [2024-11-20 09:17:10.873426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.000 qpair failed and we were unable to recover it. 00:26:34.000 [2024-11-20 09:17:10.873681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.000 [2024-11-20 09:17:10.873747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.000 qpair failed and we were unable to recover it. 00:26:34.000 [2024-11-20 09:17:10.874000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.000 [2024-11-20 09:17:10.874067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.000 qpair failed and we were unable to recover it. 00:26:34.000 [2024-11-20 09:17:10.874284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.000 [2024-11-20 09:17:10.874365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.000 qpair failed and we were unable to recover it. 00:26:34.000 [2024-11-20 09:17:10.874588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.000 [2024-11-20 09:17:10.874656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.000 qpair failed and we were unable to recover it. 
00:26:34.000 [2024-11-20 09:17:10.874944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.000 [2024-11-20 09:17:10.875011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.000 qpair failed and we were unable to recover it. 00:26:34.000 [2024-11-20 09:17:10.875294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.000 [2024-11-20 09:17:10.875378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.000 qpair failed and we were unable to recover it. 00:26:34.000 [2024-11-20 09:17:10.875594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.000 [2024-11-20 09:17:10.875662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.000 qpair failed and we were unable to recover it. 00:26:34.000 [2024-11-20 09:17:10.875919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.000 [2024-11-20 09:17:10.875986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.000 qpair failed and we were unable to recover it. 00:26:34.000 [2024-11-20 09:17:10.876253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.000 [2024-11-20 09:17:10.876337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.000 qpair failed and we were unable to recover it. 
00:26:34.000 [2024-11-20 09:17:10.876595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.000 [2024-11-20 09:17:10.876661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.000 qpair failed and we were unable to recover it. 00:26:34.000 [2024-11-20 09:17:10.876871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.000 [2024-11-20 09:17:10.876938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.000 qpair failed and we were unable to recover it. 00:26:34.000 [2024-11-20 09:17:10.877203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.000 [2024-11-20 09:17:10.877269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.000 qpair failed and we were unable to recover it. 00:26:34.000 [2024-11-20 09:17:10.877551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.000 [2024-11-20 09:17:10.877620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.000 qpair failed and we were unable to recover it. 00:26:34.000 [2024-11-20 09:17:10.877873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.000 [2024-11-20 09:17:10.877941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.000 qpair failed and we were unable to recover it. 
00:26:34.000 [2024-11-20 09:17:10.878220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.000 [2024-11-20 09:17:10.878287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.000 qpair failed and we were unable to recover it. 00:26:34.000 [2024-11-20 09:17:10.878604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.000 [2024-11-20 09:17:10.878670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.000 qpair failed and we were unable to recover it. 00:26:34.000 [2024-11-20 09:17:10.878868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.000 [2024-11-20 09:17:10.878936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.000 qpair failed and we were unable to recover it. 00:26:34.000 [2024-11-20 09:17:10.879220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.000 [2024-11-20 09:17:10.879286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.000 qpair failed and we were unable to recover it. 00:26:34.000 [2024-11-20 09:17:10.879559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.000 [2024-11-20 09:17:10.879627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.000 qpair failed and we were unable to recover it. 
00:26:34.000 [2024-11-20 09:17:10.879849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.000 [2024-11-20 09:17:10.879915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.000 qpair failed and we were unable to recover it. 00:26:34.000 [2024-11-20 09:17:10.880159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.000 [2024-11-20 09:17:10.880225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.000 qpair failed and we were unable to recover it. 00:26:34.000 [2024-11-20 09:17:10.880499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.000 [2024-11-20 09:17:10.880566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.000 qpair failed and we were unable to recover it. 00:26:34.000 [2024-11-20 09:17:10.880772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.000 [2024-11-20 09:17:10.880840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.000 qpair failed and we were unable to recover it. 00:26:34.000 [2024-11-20 09:17:10.881099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.000 [2024-11-20 09:17:10.881166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.001 qpair failed and we were unable to recover it. 
00:26:34.001 [2024-11-20 09:17:10.881459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.001 [2024-11-20 09:17:10.881546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.001 qpair failed and we were unable to recover it. 00:26:34.001 [2024-11-20 09:17:10.881797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.001 [2024-11-20 09:17:10.881865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.001 qpair failed and we were unable to recover it. 00:26:34.001 [2024-11-20 09:17:10.882133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.001 [2024-11-20 09:17:10.882199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.001 qpair failed and we were unable to recover it. 00:26:34.001 [2024-11-20 09:17:10.882436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.001 [2024-11-20 09:17:10.882502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.001 qpair failed and we were unable to recover it. 00:26:34.001 [2024-11-20 09:17:10.882775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.001 [2024-11-20 09:17:10.882841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.001 qpair failed and we were unable to recover it. 
00:26:34.001 [2024-11-20 09:17:10.883071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.001 [2024-11-20 09:17:10.883135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.001 qpair failed and we were unable to recover it. 00:26:34.001 [2024-11-20 09:17:10.883355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.001 [2024-11-20 09:17:10.883424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.001 qpair failed and we were unable to recover it. 00:26:34.001 [2024-11-20 09:17:10.883671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.001 [2024-11-20 09:17:10.883735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.001 qpair failed and we were unable to recover it. 00:26:34.001 [2024-11-20 09:17:10.884036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.001 [2024-11-20 09:17:10.884100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.001 qpair failed and we were unable to recover it. 00:26:34.001 [2024-11-20 09:17:10.884401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.001 [2024-11-20 09:17:10.884468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.001 qpair failed and we were unable to recover it. 
00:26:34.001 [2024-11-20 09:17:10.884766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.001 [2024-11-20 09:17:10.884832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.001 qpair failed and we were unable to recover it. 00:26:34.001 [2024-11-20 09:17:10.885087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.001 [2024-11-20 09:17:10.885152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.001 qpair failed and we were unable to recover it. 00:26:34.001 [2024-11-20 09:17:10.885414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.001 [2024-11-20 09:17:10.885481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.001 qpair failed and we were unable to recover it. 00:26:34.001 [2024-11-20 09:17:10.885696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.001 [2024-11-20 09:17:10.885764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.001 qpair failed and we were unable to recover it. 00:26:34.001 [2024-11-20 09:17:10.886041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.001 [2024-11-20 09:17:10.886106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.001 qpair failed and we were unable to recover it. 
00:26:34.001 [2024-11-20 09:17:10.886374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.001 [2024-11-20 09:17:10.886445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.001 qpair failed and we were unable to recover it. 00:26:34.001 [2024-11-20 09:17:10.886700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.001 [2024-11-20 09:17:10.886766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.001 qpair failed and we were unable to recover it. 00:26:34.001 [2024-11-20 09:17:10.887005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.001 [2024-11-20 09:17:10.887069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.001 qpair failed and we were unable to recover it. 00:26:34.001 [2024-11-20 09:17:10.887335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.001 [2024-11-20 09:17:10.887404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.001 qpair failed and we were unable to recover it. 00:26:34.001 [2024-11-20 09:17:10.887637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.001 [2024-11-20 09:17:10.887703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.001 qpair failed and we were unable to recover it. 
00:26:34.001 [2024-11-20 09:17:10.887951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.001 [2024-11-20 09:17:10.888018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.001 qpair failed and we were unable to recover it. 00:26:34.001 [2024-11-20 09:17:10.888320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.001 [2024-11-20 09:17:10.888387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.001 qpair failed and we were unable to recover it. 00:26:34.001 [2024-11-20 09:17:10.888606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.001 [2024-11-20 09:17:10.888675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.001 qpair failed and we were unable to recover it. 00:26:34.001 [2024-11-20 09:17:10.888944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.001 [2024-11-20 09:17:10.889009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.001 qpair failed and we were unable to recover it. 00:26:34.001 [2024-11-20 09:17:10.889269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.001 [2024-11-20 09:17:10.889354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.001 qpair failed and we were unable to recover it. 
00:26:34.001 [2024-11-20 09:17:10.889619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.001 [2024-11-20 09:17:10.889685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.001 qpair failed and we were unable to recover it. 00:26:34.001 [2024-11-20 09:17:10.889917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.002 [2024-11-20 09:17:10.889981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.002 qpair failed and we were unable to recover it. 00:26:34.002 [2024-11-20 09:17:10.890280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.002 [2024-11-20 09:17:10.890384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.002 qpair failed and we were unable to recover it. 00:26:34.002 [2024-11-20 09:17:10.890657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.002 [2024-11-20 09:17:10.890723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.002 qpair failed and we were unable to recover it. 00:26:34.002 [2024-11-20 09:17:10.890942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.002 [2024-11-20 09:17:10.891007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.002 qpair failed and we were unable to recover it. 
00:26:34.002 [2024-11-20 09:17:10.891249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.002 [2024-11-20 09:17:10.891336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.002 qpair failed and we were unable to recover it. 00:26:34.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3441662 Killed "${NVMF_APP[@]}" "$@" 00:26:34.002 [2024-11-20 09:17:10.891608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.002 [2024-11-20 09:17:10.891674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.002 qpair failed and we were unable to recover it. 00:26:34.002 [2024-11-20 09:17:10.891897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.002 [2024-11-20 09:17:10.891966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.002 qpair failed and we were unable to recover it. 00:26:34.002 09:17:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:26:34.002 [2024-11-20 09:17:10.892233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.002 [2024-11-20 09:17:10.892298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.002 qpair failed and we were unable to recover it. 
00:26:34.002 09:17:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:34.002 [2024-11-20 09:17:10.892568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.002 [2024-11-20 09:17:10.892636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.002 qpair failed and we were unable to recover it. 00:26:34.002 09:17:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:34.002 [2024-11-20 09:17:10.892921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.002 [2024-11-20 09:17:10.892988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.002 qpair failed and we were unable to recover it. 00:26:34.002 09:17:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:34.002 [2024-11-20 09:17:10.893276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.002 [2024-11-20 09:17:10.893364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.002 qpair failed and we were unable to recover it. 00:26:34.002 09:17:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:34.002 [2024-11-20 09:17:10.893593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.002 [2024-11-20 09:17:10.893662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.002 qpair failed and we were unable to recover it. 
00:26:34.002 [2024-11-20 09:17:10.893927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.002 [2024-11-20 09:17:10.893994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.002 qpair failed and we were unable to recover it. 00:26:34.002 [2024-11-20 09:17:10.894250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.002 [2024-11-20 09:17:10.894337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.002 qpair failed and we were unable to recover it. 00:26:34.002 [2024-11-20 09:17:10.894589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.002 [2024-11-20 09:17:10.894654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.002 qpair failed and we were unable to recover it. 00:26:34.002 [2024-11-20 09:17:10.894919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.002 [2024-11-20 09:17:10.894986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.002 qpair failed and we were unable to recover it. 00:26:34.002 [2024-11-20 09:17:10.895289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.002 [2024-11-20 09:17:10.895380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.002 qpair failed and we were unable to recover it. 
00:26:34.002 [2024-11-20 09:17:10.895507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.002 [2024-11-20 09:17:10.895541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.002 qpair failed and we were unable to recover it. 00:26:34.002 [2024-11-20 09:17:10.895675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.002 [2024-11-20 09:17:10.895709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.002 qpair failed and we were unable to recover it. 00:26:34.002 [2024-11-20 09:17:10.895881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.002 [2024-11-20 09:17:10.895915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.002 qpair failed and we were unable to recover it. 00:26:34.002 [2024-11-20 09:17:10.896048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.002 [2024-11-20 09:17:10.896081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.002 qpair failed and we were unable to recover it. 00:26:34.002 [2024-11-20 09:17:10.896189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.002 [2024-11-20 09:17:10.896224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.002 qpair failed and we were unable to recover it. 
00:26:34.002 [2024-11-20 09:17:10.896446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.002 [2024-11-20 09:17:10.896481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.002 qpair failed and we were unable to recover it. 00:26:34.002 [2024-11-20 09:17:10.896593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.002 [2024-11-20 09:17:10.896630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.002 qpair failed and we were unable to recover it. 00:26:34.002 [2024-11-20 09:17:10.896771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.002 [2024-11-20 09:17:10.896807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.002 qpair failed and we were unable to recover it. 00:26:34.002 [2024-11-20 09:17:10.896998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.002 [2024-11-20 09:17:10.897035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.002 qpair failed and we were unable to recover it. 00:26:34.002 [2024-11-20 09:17:10.897193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.002 [2024-11-20 09:17:10.897226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.002 qpair failed and we were unable to recover it. 
00:26:34.002 [2024-11-20 09:17:10.897336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.002 [2024-11-20 09:17:10.897371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.002 qpair failed and we were unable to recover it. 00:26:34.002 [2024-11-20 09:17:10.897485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.002 [2024-11-20 09:17:10.897519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.003 09:17:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3442209 00:26:34.003 qpair failed and we were unable to recover it. 00:26:34.003 09:17:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:34.003 [2024-11-20 09:17:10.897646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.003 09:17:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3442209 00:26:34.003 [2024-11-20 09:17:10.897680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.003 qpair failed and we were unable to recover it. 
00:26:34.003 09:17:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3442209 ']' 00:26:34.003 [2024-11-20 09:17:10.897925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.003 09:17:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:34.003 [2024-11-20 09:17:10.897993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.003 qpair failed and we were unable to recover it. 00:26:34.003 09:17:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:34.003 [2024-11-20 09:17:10.898212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.003 09:17:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:34.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:34.003 [2024-11-20 09:17:10.898265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.003 qpair failed and we were unable to recover it. 
00:26:34.003 09:17:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:34.003 [2024-11-20 09:17:10.898434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.003 [2024-11-20 09:17:10.898469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.003 09:17:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:34.003 qpair failed and we were unable to recover it. 00:26:34.003 [2024-11-20 09:17:10.898611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.003 [2024-11-20 09:17:10.898661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.003 qpair failed and we were unable to recover it. 00:26:34.003 [2024-11-20 09:17:10.898827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.003 [2024-11-20 09:17:10.898876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.003 qpair failed and we were unable to recover it. 00:26:34.003 [2024-11-20 09:17:10.899040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.003 [2024-11-20 09:17:10.899092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.003 qpair failed and we were unable to recover it. 
00:26:34.003 [2024-11-20 09:17:10.899320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.003 [2024-11-20 09:17:10.899354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.003 qpair failed and we were unable to recover it. 00:26:34.003 [2024-11-20 09:17:10.899468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.003 [2024-11-20 09:17:10.899501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.003 qpair failed and we were unable to recover it. 00:26:34.003 [2024-11-20 09:17:10.899691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.003 [2024-11-20 09:17:10.899751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.003 qpair failed and we were unable to recover it. 00:26:34.003 [2024-11-20 09:17:10.899928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.003 [2024-11-20 09:17:10.899995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.003 qpair failed and we were unable to recover it. 00:26:34.003 [2024-11-20 09:17:10.900238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.003 [2024-11-20 09:17:10.900322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.003 qpair failed and we were unable to recover it. 
00:26:34.003 [2024-11-20 09:17:10.900491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.003 [2024-11-20 09:17:10.900525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.003 qpair failed and we were unable to recover it.
00:26:34.003 [2024-11-20 09:17:10.900669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.003 [2024-11-20 09:17:10.900703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.003 qpair failed and we were unable to recover it.
00:26:34.003 [2024-11-20 09:17:10.900949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.003 [2024-11-20 09:17:10.901015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.003 qpair failed and we were unable to recover it.
00:26:34.003 [2024-11-20 09:17:10.901320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.003 [2024-11-20 09:17:10.901385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.003 qpair failed and we were unable to recover it.
00:26:34.003 [2024-11-20 09:17:10.901536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.003 [2024-11-20 09:17:10.901569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.003 qpair failed and we were unable to recover it.
00:26:34.003 [2024-11-20 09:17:10.901686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.003 [2024-11-20 09:17:10.901719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.003 qpair failed and we were unable to recover it.
00:26:34.003 [2024-11-20 09:17:10.901968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.003 [2024-11-20 09:17:10.902035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.003 qpair failed and we were unable to recover it.
00:26:34.003 [2024-11-20 09:17:10.902275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.003 [2024-11-20 09:17:10.902373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.003 qpair failed and we were unable to recover it.
00:26:34.003 [2024-11-20 09:17:10.902492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.003 [2024-11-20 09:17:10.902526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.003 qpair failed and we were unable to recover it.
00:26:34.003 [2024-11-20 09:17:10.902673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.003 [2024-11-20 09:17:10.902707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.003 qpair failed and we were unable to recover it.
00:26:34.003 [2024-11-20 09:17:10.902877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.003 [2024-11-20 09:17:10.902947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.003 qpair failed and we were unable to recover it.
00:26:34.003 [2024-11-20 09:17:10.903191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.003 [2024-11-20 09:17:10.903265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.003 qpair failed and we were unable to recover it.
00:26:34.003 [2024-11-20 09:17:10.903397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.003 [2024-11-20 09:17:10.903431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.003 qpair failed and we were unable to recover it.
00:26:34.003 [2024-11-20 09:17:10.903565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.003 [2024-11-20 09:17:10.903598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.003 qpair failed and we were unable to recover it.
00:26:34.003 [2024-11-20 09:17:10.903702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.003 [2024-11-20 09:17:10.903735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.003 qpair failed and we were unable to recover it.
00:26:34.003 [2024-11-20 09:17:10.903934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.003 [2024-11-20 09:17:10.903998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.003 qpair failed and we were unable to recover it.
00:26:34.003 [2024-11-20 09:17:10.904273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.003 [2024-11-20 09:17:10.904337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.003 qpair failed and we were unable to recover it.
00:26:34.003 [2024-11-20 09:17:10.904451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.003 [2024-11-20 09:17:10.904485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.003 qpair failed and we were unable to recover it.
00:26:34.003 [2024-11-20 09:17:10.904630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.003 [2024-11-20 09:17:10.904663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.003 qpair failed and we were unable to recover it.
00:26:34.003 [2024-11-20 09:17:10.904778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.003 [2024-11-20 09:17:10.904858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.003 qpair failed and we were unable to recover it.
00:26:34.003 [2024-11-20 09:17:10.905122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.003 [2024-11-20 09:17:10.905188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.003 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.905435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.905470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.905591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.905670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.905912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.905993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.906202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.906235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.906358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.906392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.906533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.906568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.906743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.906812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.907059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.907124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.907310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.907345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.907461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.907494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.907612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.907647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.907854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.907919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.908189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.908245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.908393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.908426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.908534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.908568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.908668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.908702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.908939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.909006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.909293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.909365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.909469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.909505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.909622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.909655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.909796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.909849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.909993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.910028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.910195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.910231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.910408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.910443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.910588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.910621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.910767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.910802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.910980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.911046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.911257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.911292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.911419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.911452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.911580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.911654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.911878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.911914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.912062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.912097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.912382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.912416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.912562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.912595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.912696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.912729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.912892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.912941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.004 [2024-11-20 09:17:10.913197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.004 [2024-11-20 09:17:10.913247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.004 qpair failed and we were unable to recover it.
00:26:34.005 [2024-11-20 09:17:10.913410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.005 [2024-11-20 09:17:10.913445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.005 qpair failed and we were unable to recover it.
00:26:34.005 [2024-11-20 09:17:10.913560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.005 [2024-11-20 09:17:10.913600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.005 qpair failed and we were unable to recover it.
00:26:34.005 [2024-11-20 09:17:10.913820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.005 [2024-11-20 09:17:10.913855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.005 qpair failed and we were unable to recover it.
00:26:34.005 [2024-11-20 09:17:10.913997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.005 [2024-11-20 09:17:10.914030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.005 qpair failed and we were unable to recover it.
00:26:34.005 [2024-11-20 09:17:10.914152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.005 [2024-11-20 09:17:10.914186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.005 qpair failed and we were unable to recover it.
00:26:34.005 [2024-11-20 09:17:10.914322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.005 [2024-11-20 09:17:10.914356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.005 qpair failed and we were unable to recover it.
00:26:34.005 [2024-11-20 09:17:10.914471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.005 [2024-11-20 09:17:10.914505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.005 qpair failed and we were unable to recover it.
00:26:34.005 [2024-11-20 09:17:10.914653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.005 [2024-11-20 09:17:10.914686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.005 qpair failed and we were unable to recover it.
00:26:34.005 [2024-11-20 09:17:10.914965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.005 [2024-11-20 09:17:10.915039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.005 qpair failed and we were unable to recover it.
00:26:34.005 [2024-11-20 09:17:10.915267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.005 [2024-11-20 09:17:10.915301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.005 qpair failed and we were unable to recover it.
00:26:34.005 [2024-11-20 09:17:10.915452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.005 [2024-11-20 09:17:10.915486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.005 qpair failed and we were unable to recover it.
00:26:34.005 [2024-11-20 09:17:10.915611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.005 [2024-11-20 09:17:10.915644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.005 qpair failed and we were unable to recover it.
00:26:34.005 [2024-11-20 09:17:10.915863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.005 [2024-11-20 09:17:10.915897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.005 qpair failed and we were unable to recover it.
00:26:34.005 [2024-11-20 09:17:10.916031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.005 [2024-11-20 09:17:10.916065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.005 qpair failed and we were unable to recover it.
00:26:34.005 [2024-11-20 09:17:10.916212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.005 [2024-11-20 09:17:10.916284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.005 qpair failed and we were unable to recover it.
00:26:34.005 [2024-11-20 09:17:10.916465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.005 [2024-11-20 09:17:10.916500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.005 qpair failed and we were unable to recover it.
00:26:34.005 [2024-11-20 09:17:10.916600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.005 [2024-11-20 09:17:10.916638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.005 qpair failed and we were unable to recover it.
00:26:34.005 [2024-11-20 09:17:10.916807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.005 [2024-11-20 09:17:10.916841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.005 qpair failed and we were unable to recover it.
00:26:34.005 [2024-11-20 09:17:10.917020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.005 [2024-11-20 09:17:10.917054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.005 qpair failed and we were unable to recover it.
00:26:34.005 [2024-11-20 09:17:10.917167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.005 [2024-11-20 09:17:10.917199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.005 qpair failed and we were unable to recover it.
00:26:34.005 [2024-11-20 09:17:10.917320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.005 [2024-11-20 09:17:10.917353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.005 qpair failed and we were unable to recover it.
00:26:34.005 [2024-11-20 09:17:10.917499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.005 [2024-11-20 09:17:10.917533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.005 qpair failed and we were unable to recover it.
00:26:34.005 [2024-11-20 09:17:10.917659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.005 [2024-11-20 09:17:10.917693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.005 qpair failed and we were unable to recover it.
00:26:34.005 [2024-11-20 09:17:10.917837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.005 [2024-11-20 09:17:10.917872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.005 qpair failed and we were unable to recover it.
00:26:34.005 [2024-11-20 09:17:10.918040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.005 [2024-11-20 09:17:10.918074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.005 qpair failed and we were unable to recover it.
00:26:34.005 [2024-11-20 09:17:10.918178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.005 [2024-11-20 09:17:10.918211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.005 qpair failed and we were unable to recover it.
00:26:34.005 [2024-11-20 09:17:10.918321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.005 [2024-11-20 09:17:10.918355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.005 qpair failed and we were unable to recover it.
00:26:34.005 [2024-11-20 09:17:10.918501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.005 [2024-11-20 09:17:10.918534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.005 qpair failed and we were unable to recover it.
00:26:34.005 [2024-11-20 09:17:10.918703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.005 [2024-11-20 09:17:10.918754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.005 qpair failed and we were unable to recover it.
00:26:34.005 [2024-11-20 09:17:10.918941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.005 [2024-11-20 09:17:10.919010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.005 qpair failed and we were unable to recover it.
00:26:34.005 [2024-11-20 09:17:10.919300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.005 [2024-11-20 09:17:10.919395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.005 qpair failed and we were unable to recover it.
00:26:34.005 [2024-11-20 09:17:10.919546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.005 [2024-11-20 09:17:10.919595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.005 qpair failed and we were unable to recover it.
00:26:34.006 [2024-11-20 09:17:10.919850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.006 [2024-11-20 09:17:10.919915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.006 qpair failed and we were unable to recover it.
00:26:34.006 [2024-11-20 09:17:10.920208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.006 [2024-11-20 09:17:10.920274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.006 qpair failed and we were unable to recover it.
00:26:34.006 [2024-11-20 09:17:10.920524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.006 [2024-11-20 09:17:10.920574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.006 qpair failed and we were unable to recover it.
00:26:34.006 [2024-11-20 09:17:10.920786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.006 [2024-11-20 09:17:10.920851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.006 qpair failed and we were unable to recover it.
00:26:34.006 [2024-11-20 09:17:10.921092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.006 [2024-11-20 09:17:10.921159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.006 qpair failed and we were unable to recover it.
00:26:34.006 [2024-11-20 09:17:10.921406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.006 [2024-11-20 09:17:10.921457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.006 qpair failed and we were unable to recover it. 00:26:34.006 [2024-11-20 09:17:10.921681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.006 [2024-11-20 09:17:10.921746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.006 qpair failed and we were unable to recover it. 00:26:34.006 [2024-11-20 09:17:10.922053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.006 [2024-11-20 09:17:10.922103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.006 qpair failed and we were unable to recover it. 00:26:34.006 [2024-11-20 09:17:10.922237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.006 [2024-11-20 09:17:10.922271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.006 qpair failed and we were unable to recover it. 00:26:34.006 [2024-11-20 09:17:10.922476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.006 [2024-11-20 09:17:10.922517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.006 qpair failed and we were unable to recover it. 
00:26:34.006 [2024-11-20 09:17:10.922649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.006 [2024-11-20 09:17:10.922682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.006 qpair failed and we were unable to recover it. 00:26:34.006 [2024-11-20 09:17:10.922820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.006 [2024-11-20 09:17:10.922855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.006 qpair failed and we were unable to recover it. 00:26:34.006 [2024-11-20 09:17:10.923062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.006 [2024-11-20 09:17:10.923127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.006 qpair failed and we were unable to recover it. 00:26:34.006 [2024-11-20 09:17:10.923416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.006 [2024-11-20 09:17:10.923465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.006 qpair failed and we were unable to recover it. 00:26:34.006 [2024-11-20 09:17:10.923696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.006 [2024-11-20 09:17:10.923748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.006 qpair failed and we were unable to recover it. 
00:26:34.006 [2024-11-20 09:17:10.923926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.006 [2024-11-20 09:17:10.923962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.006 qpair failed and we were unable to recover it. 00:26:34.006 [2024-11-20 09:17:10.924081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.006 [2024-11-20 09:17:10.924116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.006 qpair failed and we were unable to recover it. 00:26:34.006 [2024-11-20 09:17:10.924247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.006 [2024-11-20 09:17:10.924281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.006 qpair failed and we were unable to recover it. 00:26:34.006 [2024-11-20 09:17:10.924401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.006 [2024-11-20 09:17:10.924435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.006 qpair failed and we were unable to recover it. 00:26:34.006 [2024-11-20 09:17:10.924548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.006 [2024-11-20 09:17:10.924583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.006 qpair failed and we were unable to recover it. 
00:26:34.006 [2024-11-20 09:17:10.924690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.006 [2024-11-20 09:17:10.924723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.006 qpair failed and we were unable to recover it. 00:26:34.006 [2024-11-20 09:17:10.924844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.006 [2024-11-20 09:17:10.924878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.006 qpair failed and we were unable to recover it. 00:26:34.006 [2024-11-20 09:17:10.925045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.006 [2024-11-20 09:17:10.925079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.006 qpair failed and we were unable to recover it. 00:26:34.006 [2024-11-20 09:17:10.925222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.006 [2024-11-20 09:17:10.925257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.006 qpair failed and we were unable to recover it. 00:26:34.006 [2024-11-20 09:17:10.925417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.006 [2024-11-20 09:17:10.925452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.006 qpair failed and we were unable to recover it. 
00:26:34.006 [2024-11-20 09:17:10.925576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.006 [2024-11-20 09:17:10.925621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.006 qpair failed and we were unable to recover it. 00:26:34.006 [2024-11-20 09:17:10.925796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.006 [2024-11-20 09:17:10.925830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.006 qpair failed and we were unable to recover it. 00:26:34.006 [2024-11-20 09:17:10.925945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.006 [2024-11-20 09:17:10.925980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.006 qpair failed and we were unable to recover it. 00:26:34.006 [2024-11-20 09:17:10.926120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.006 [2024-11-20 09:17:10.926155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.006 qpair failed and we were unable to recover it. 00:26:34.006 [2024-11-20 09:17:10.926377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.006 [2024-11-20 09:17:10.926411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.006 qpair failed and we were unable to recover it. 
00:26:34.006 [2024-11-20 09:17:10.926521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.006 [2024-11-20 09:17:10.926555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.006 qpair failed and we were unable to recover it. 00:26:34.006 [2024-11-20 09:17:10.926709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.006 [2024-11-20 09:17:10.926743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.006 qpair failed and we were unable to recover it. 00:26:34.006 [2024-11-20 09:17:10.926895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.006 [2024-11-20 09:17:10.926929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.006 qpair failed and we were unable to recover it. 00:26:34.007 [2024-11-20 09:17:10.927034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.007 [2024-11-20 09:17:10.927067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.007 qpair failed and we were unable to recover it. 00:26:34.007 [2024-11-20 09:17:10.927203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.007 [2024-11-20 09:17:10.927236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.007 qpair failed and we were unable to recover it. 
00:26:34.007 [2024-11-20 09:17:10.927383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.007 [2024-11-20 09:17:10.927437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.007 qpair failed and we were unable to recover it. 00:26:34.007 [2024-11-20 09:17:10.927601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.007 [2024-11-20 09:17:10.927665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.007 qpair failed and we were unable to recover it. 00:26:34.007 [2024-11-20 09:17:10.927916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.007 [2024-11-20 09:17:10.927981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.007 qpair failed and we were unable to recover it. 00:26:34.007 [2024-11-20 09:17:10.928273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.007 [2024-11-20 09:17:10.928355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.007 qpair failed and we were unable to recover it. 00:26:34.007 [2024-11-20 09:17:10.928635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.007 [2024-11-20 09:17:10.928701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.007 qpair failed and we were unable to recover it. 
00:26:34.007 [2024-11-20 09:17:10.928991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.007 [2024-11-20 09:17:10.929054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.007 qpair failed and we were unable to recover it. 00:26:34.007 [2024-11-20 09:17:10.929344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.007 [2024-11-20 09:17:10.929379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.007 qpair failed and we were unable to recover it. 00:26:34.007 [2024-11-20 09:17:10.929547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.007 [2024-11-20 09:17:10.929581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.007 qpair failed and we were unable to recover it. 00:26:34.007 [2024-11-20 09:17:10.929860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.007 [2024-11-20 09:17:10.929922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.007 qpair failed and we were unable to recover it. 00:26:34.007 [2024-11-20 09:17:10.930142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.007 [2024-11-20 09:17:10.930191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.007 qpair failed and we were unable to recover it. 
00:26:34.007 [2024-11-20 09:17:10.930427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.007 [2024-11-20 09:17:10.930476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.007 qpair failed and we were unable to recover it. 00:26:34.007 [2024-11-20 09:17:10.930715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.007 [2024-11-20 09:17:10.930780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.007 qpair failed and we were unable to recover it. 00:26:34.007 [2024-11-20 09:17:10.931043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.007 [2024-11-20 09:17:10.931108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.007 qpair failed and we were unable to recover it. 00:26:34.007 [2024-11-20 09:17:10.931334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.007 [2024-11-20 09:17:10.931383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.007 qpair failed and we were unable to recover it. 00:26:34.007 [2024-11-20 09:17:10.931581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.007 [2024-11-20 09:17:10.931671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.007 qpair failed and we were unable to recover it. 
00:26:34.007 [2024-11-20 09:17:10.931864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.007 [2024-11-20 09:17:10.931916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.007 qpair failed and we were unable to recover it. 00:26:34.007 [2024-11-20 09:17:10.932027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.007 [2024-11-20 09:17:10.932061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.007 qpair failed and we were unable to recover it. 00:26:34.007 [2024-11-20 09:17:10.932178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.007 [2024-11-20 09:17:10.932213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.007 qpair failed and we were unable to recover it. 00:26:34.007 [2024-11-20 09:17:10.932425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.007 [2024-11-20 09:17:10.932475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.007 qpair failed and we were unable to recover it. 00:26:34.007 [2024-11-20 09:17:10.932626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.007 [2024-11-20 09:17:10.932702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.007 qpair failed and we were unable to recover it. 
00:26:34.007 [2024-11-20 09:17:10.932937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.007 [2024-11-20 09:17:10.932970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.007 qpair failed and we were unable to recover it. 00:26:34.007 [2024-11-20 09:17:10.933128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.007 [2024-11-20 09:17:10.933161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.007 qpair failed and we were unable to recover it. 00:26:34.007 [2024-11-20 09:17:10.933381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.007 [2024-11-20 09:17:10.933430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.007 qpair failed and we were unable to recover it. 00:26:34.007 [2024-11-20 09:17:10.933653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.007 [2024-11-20 09:17:10.933717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.007 qpair failed and we were unable to recover it. 00:26:34.008 [2024-11-20 09:17:10.933983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.008 [2024-11-20 09:17:10.934051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.008 qpair failed and we were unable to recover it. 
00:26:34.008 [2024-11-20 09:17:10.934314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.008 [2024-11-20 09:17:10.934348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.008 qpair failed and we were unable to recover it. 00:26:34.008 [2024-11-20 09:17:10.934516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.008 [2024-11-20 09:17:10.934577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.008 qpair failed and we were unable to recover it. 00:26:34.008 [2024-11-20 09:17:10.934842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.008 [2024-11-20 09:17:10.934906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.008 qpair failed and we were unable to recover it. 00:26:34.008 [2024-11-20 09:17:10.935207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.008 [2024-11-20 09:17:10.935272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.008 qpair failed and we were unable to recover it. 00:26:34.008 [2024-11-20 09:17:10.935530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.008 [2024-11-20 09:17:10.935578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.008 qpair failed and we were unable to recover it. 
00:26:34.008 [2024-11-20 09:17:10.935768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.008 [2024-11-20 09:17:10.935847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.008 qpair failed and we were unable to recover it. 00:26:34.008 [2024-11-20 09:17:10.936091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.008 [2024-11-20 09:17:10.936155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.008 qpair failed and we were unable to recover it. 00:26:34.008 [2024-11-20 09:17:10.936420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.008 [2024-11-20 09:17:10.936485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.008 qpair failed and we were unable to recover it. 00:26:34.008 [2024-11-20 09:17:10.936741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.008 [2024-11-20 09:17:10.936804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.008 qpair failed and we were unable to recover it. 00:26:34.008 [2024-11-20 09:17:10.937085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.008 [2024-11-20 09:17:10.937132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.008 qpair failed and we were unable to recover it. 
00:26:34.008 [2024-11-20 09:17:10.937294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.008 [2024-11-20 09:17:10.937392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.008 qpair failed and we were unable to recover it. 00:26:34.008 [2024-11-20 09:17:10.937636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.008 [2024-11-20 09:17:10.937670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.008 qpair failed and we were unable to recover it. 00:26:34.008 [2024-11-20 09:17:10.937804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.008 [2024-11-20 09:17:10.937869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.008 qpair failed and we were unable to recover it. 00:26:34.008 [2024-11-20 09:17:10.938132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.008 [2024-11-20 09:17:10.938198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.008 qpair failed and we were unable to recover it. 00:26:34.008 [2024-11-20 09:17:10.938460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.008 [2024-11-20 09:17:10.938494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.008 qpair failed and we were unable to recover it. 
00:26:34.008 [2024-11-20 09:17:10.938610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.008 [2024-11-20 09:17:10.938643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.008 qpair failed and we were unable to recover it. 00:26:34.008 [2024-11-20 09:17:10.938811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.008 [2024-11-20 09:17:10.938850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.008 qpair failed and we were unable to recover it. 00:26:34.008 [2024-11-20 09:17:10.939016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.008 [2024-11-20 09:17:10.939049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.008 qpair failed and we were unable to recover it. 00:26:34.008 [2024-11-20 09:17:10.939152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.008 [2024-11-20 09:17:10.939185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.008 qpair failed and we were unable to recover it. 00:26:34.008 [2024-11-20 09:17:10.939286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.008 [2024-11-20 09:17:10.939329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.008 qpair failed and we were unable to recover it. 
00:26:34.008 [2024-11-20 09:17:10.939472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.008 [2024-11-20 09:17:10.939506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.008 qpair failed and we were unable to recover it. 00:26:34.008 [2024-11-20 09:17:10.939648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.008 [2024-11-20 09:17:10.939681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.008 qpair failed and we were unable to recover it. 00:26:34.008 [2024-11-20 09:17:10.939856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.008 [2024-11-20 09:17:10.939904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.008 qpair failed and we were unable to recover it. 00:26:34.008 [2024-11-20 09:17:10.940122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.008 [2024-11-20 09:17:10.940186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.008 qpair failed and we were unable to recover it. 00:26:34.008 [2024-11-20 09:17:10.940431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.008 [2024-11-20 09:17:10.940497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.008 qpair failed and we were unable to recover it. 
00:26:34.008 [2024-11-20 09:17:10.940708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.008 [2024-11-20 09:17:10.940743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.008 qpair failed and we were unable to recover it. 00:26:34.009 [2024-11-20 09:17:10.940860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.009 [2024-11-20 09:17:10.940894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.009 qpair failed and we were unable to recover it. 00:26:34.009 [2024-11-20 09:17:10.941038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.009 [2024-11-20 09:17:10.941071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.009 qpair failed and we were unable to recover it. 00:26:34.009 [2024-11-20 09:17:10.941245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.009 [2024-11-20 09:17:10.941330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.009 qpair failed and we were unable to recover it. 00:26:34.009 [2024-11-20 09:17:10.941625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.009 [2024-11-20 09:17:10.941689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.009 qpair failed and we were unable to recover it. 
00:26:34.009 [2024-11-20 09:17:10.941948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.009 [2024-11-20 09:17:10.942013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.009 qpair failed and we were unable to recover it.
00:26:34.009 [2024-11-20 09:17:10.948252] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization...
00:26:34.009 [2024-11-20 09:17:10.948357] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:34.012 [2024-11-20 09:17:10.967538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.012 [2024-11-20 09:17:10.967570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.012 qpair failed and we were unable to recover it. 00:26:34.012 [2024-11-20 09:17:10.967696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.012 [2024-11-20 09:17:10.967728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.012 qpair failed and we were unable to recover it. 00:26:34.012 [2024-11-20 09:17:10.967889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.012 [2024-11-20 09:17:10.967921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.012 qpair failed and we were unable to recover it. 00:26:34.012 [2024-11-20 09:17:10.968056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.012 [2024-11-20 09:17:10.968089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.012 qpair failed and we were unable to recover it. 00:26:34.012 [2024-11-20 09:17:10.968290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.012 [2024-11-20 09:17:10.968383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.012 qpair failed and we were unable to recover it. 
00:26:34.012 [2024-11-20 09:17:10.968545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.012 [2024-11-20 09:17:10.968580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.012 qpair failed and we were unable to recover it. 00:26:34.012 [2024-11-20 09:17:10.968726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.012 [2024-11-20 09:17:10.968760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.012 qpair failed and we were unable to recover it. 00:26:34.012 [2024-11-20 09:17:10.968901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.012 [2024-11-20 09:17:10.968935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.012 qpair failed and we were unable to recover it. 00:26:34.012 [2024-11-20 09:17:10.969107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.012 [2024-11-20 09:17:10.969141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.012 qpair failed and we were unable to recover it. 00:26:34.012 [2024-11-20 09:17:10.969315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.012 [2024-11-20 09:17:10.969349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.012 qpair failed and we were unable to recover it. 
00:26:34.012 [2024-11-20 09:17:10.969488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.012 [2024-11-20 09:17:10.969522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.012 qpair failed and we were unable to recover it. 00:26:34.012 [2024-11-20 09:17:10.969658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.012 [2024-11-20 09:17:10.969693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.012 qpair failed and we were unable to recover it. 00:26:34.012 [2024-11-20 09:17:10.969832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.012 [2024-11-20 09:17:10.969865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.012 qpair failed and we were unable to recover it. 00:26:34.012 [2024-11-20 09:17:10.970033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.012 [2024-11-20 09:17:10.970067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.012 qpair failed and we were unable to recover it. 00:26:34.012 [2024-11-20 09:17:10.970182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.012 [2024-11-20 09:17:10.970218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.012 qpair failed and we were unable to recover it. 
00:26:34.012 [2024-11-20 09:17:10.970364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.012 [2024-11-20 09:17:10.970399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.012 qpair failed and we were unable to recover it. 00:26:34.012 [2024-11-20 09:17:10.970569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.012 [2024-11-20 09:17:10.970604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.012 qpair failed and we were unable to recover it. 00:26:34.012 [2024-11-20 09:17:10.970775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.012 [2024-11-20 09:17:10.970810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.012 qpair failed and we were unable to recover it. 00:26:34.012 [2024-11-20 09:17:10.970964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.012 [2024-11-20 09:17:10.970998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 00:26:34.013 [2024-11-20 09:17:10.971168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.971202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 
00:26:34.013 [2024-11-20 09:17:10.971340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.971375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 00:26:34.013 [2024-11-20 09:17:10.971565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.971600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 00:26:34.013 [2024-11-20 09:17:10.971725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.971759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 00:26:34.013 [2024-11-20 09:17:10.971882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.971916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 00:26:34.013 [2024-11-20 09:17:10.972104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.972139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 
00:26:34.013 [2024-11-20 09:17:10.972321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.972357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 00:26:34.013 [2024-11-20 09:17:10.972497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.972532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 00:26:34.013 [2024-11-20 09:17:10.972675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.972711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 00:26:34.013 [2024-11-20 09:17:10.972854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.972888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 00:26:34.013 [2024-11-20 09:17:10.973037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.973072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 
00:26:34.013 [2024-11-20 09:17:10.973235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.973299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 00:26:34.013 [2024-11-20 09:17:10.973494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.973529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 00:26:34.013 [2024-11-20 09:17:10.973694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.973733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 00:26:34.013 [2024-11-20 09:17:10.973929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.973963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 00:26:34.013 [2024-11-20 09:17:10.974094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.974128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 
00:26:34.013 [2024-11-20 09:17:10.974324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.974362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 00:26:34.013 [2024-11-20 09:17:10.974480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.974517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 00:26:34.013 [2024-11-20 09:17:10.974713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.974746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 00:26:34.013 [2024-11-20 09:17:10.974880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.974913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 00:26:34.013 [2024-11-20 09:17:10.975075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.975112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 
00:26:34.013 [2024-11-20 09:17:10.975259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.975296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 00:26:34.013 [2024-11-20 09:17:10.975459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.975496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 00:26:34.013 [2024-11-20 09:17:10.975615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.975652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 00:26:34.013 [2024-11-20 09:17:10.975795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.975831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 00:26:34.013 [2024-11-20 09:17:10.976008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.976051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 
00:26:34.013 [2024-11-20 09:17:10.976275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.976365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 00:26:34.013 [2024-11-20 09:17:10.976527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.976564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 00:26:34.013 [2024-11-20 09:17:10.976722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.976759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 00:26:34.013 [2024-11-20 09:17:10.976949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.976982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 00:26:34.013 [2024-11-20 09:17:10.977089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.977123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 
00:26:34.013 [2024-11-20 09:17:10.977264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.977320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 00:26:34.013 [2024-11-20 09:17:10.977463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.977501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 00:26:34.013 [2024-11-20 09:17:10.977664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.977699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 00:26:34.013 [2024-11-20 09:17:10.977843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.977877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 00:26:34.013 [2024-11-20 09:17:10.978047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.978080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 
00:26:34.013 [2024-11-20 09:17:10.978196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.013 [2024-11-20 09:17:10.978230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.013 qpair failed and we were unable to recover it. 00:26:34.014 [2024-11-20 09:17:10.978383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.978423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 00:26:34.014 [2024-11-20 09:17:10.978623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.978662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 00:26:34.014 [2024-11-20 09:17:10.978854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.978888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 00:26:34.014 [2024-11-20 09:17:10.979037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.979071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 
00:26:34.014 [2024-11-20 09:17:10.979312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.979351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 00:26:34.014 [2024-11-20 09:17:10.979506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.979544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 00:26:34.014 [2024-11-20 09:17:10.979707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.979747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 00:26:34.014 [2024-11-20 09:17:10.979936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.979974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 00:26:34.014 [2024-11-20 09:17:10.980125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.980164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 
00:26:34.014 [2024-11-20 09:17:10.980325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.980365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 00:26:34.014 [2024-11-20 09:17:10.980521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.980560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 00:26:34.014 [2024-11-20 09:17:10.980697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.980737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 00:26:34.014 [2024-11-20 09:17:10.980901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.980940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 00:26:34.014 [2024-11-20 09:17:10.981093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.981135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 
00:26:34.014 [2024-11-20 09:17:10.981426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.981468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 00:26:34.014 [2024-11-20 09:17:10.981614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.981656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 00:26:34.014 [2024-11-20 09:17:10.981827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.981868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 00:26:34.014 [2024-11-20 09:17:10.982041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.982085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 00:26:34.014 [2024-11-20 09:17:10.982232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.982266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 
00:26:34.014 [2024-11-20 09:17:10.982461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.982513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 00:26:34.014 [2024-11-20 09:17:10.982698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.982732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 00:26:34.014 [2024-11-20 09:17:10.982840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.982873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 00:26:34.014 [2024-11-20 09:17:10.982973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.983007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 00:26:34.014 [2024-11-20 09:17:10.983111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.983144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 
00:26:34.014 [2024-11-20 09:17:10.983287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.983338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 00:26:34.014 [2024-11-20 09:17:10.983534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.983575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 00:26:34.014 [2024-11-20 09:17:10.983723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.983765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 00:26:34.014 [2024-11-20 09:17:10.983901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.983943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 00:26:34.014 [2024-11-20 09:17:10.984124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.984164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 
00:26:34.014 [2024-11-20 09:17:10.984272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.984313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 00:26:34.014 [2024-11-20 09:17:10.984435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.984471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 00:26:34.014 [2024-11-20 09:17:10.984672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.984715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 00:26:34.014 [2024-11-20 09:17:10.984900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.984968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 00:26:34.014 [2024-11-20 09:17:10.985199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.985241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 
00:26:34.014 [2024-11-20 09:17:10.985425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.985484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 00:26:34.014 [2024-11-20 09:17:10.985622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.985669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 00:26:34.014 [2024-11-20 09:17:10.985851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.985896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 00:26:34.014 [2024-11-20 09:17:10.986081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.014 [2024-11-20 09:17:10.986116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.014 qpair failed and we were unable to recover it. 00:26:34.015 [2024-11-20 09:17:10.986233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.015 [2024-11-20 09:17:10.986269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.015 qpair failed and we were unable to recover it. 
00:26:34.015 [2024-11-20 09:17:10.986412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.015 [2024-11-20 09:17:10.986458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.015 qpair failed and we were unable to recover it. 00:26:34.015 [2024-11-20 09:17:10.986637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.015 [2024-11-20 09:17:10.986680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.015 qpair failed and we were unable to recover it. 00:26:34.015 [2024-11-20 09:17:10.986882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.015 [2024-11-20 09:17:10.986925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.015 qpair failed and we were unable to recover it. 00:26:34.015 [2024-11-20 09:17:10.987104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.015 [2024-11-20 09:17:10.987148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.015 qpair failed and we were unable to recover it. 00:26:34.015 [2024-11-20 09:17:10.987362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.015 [2024-11-20 09:17:10.987416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.015 qpair failed and we were unable to recover it. 
00:26:34.015 [2024-11-20 09:17:10.987510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.015 [2024-11-20 09:17:10.987537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.015 qpair failed and we were unable to recover it. 00:26:34.015 [2024-11-20 09:17:10.987635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.015 [2024-11-20 09:17:10.987662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.015 qpair failed and we were unable to recover it. 00:26:34.015 [2024-11-20 09:17:10.987777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.015 [2024-11-20 09:17:10.987820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.015 qpair failed and we were unable to recover it. 00:26:34.015 [2024-11-20 09:17:10.987946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.285 [2024-11-20 09:17:10.987974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.285 qpair failed and we were unable to recover it. 00:26:34.285 [2024-11-20 09:17:10.988064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.285 [2024-11-20 09:17:10.988091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.285 qpair failed and we were unable to recover it. 
00:26:34.285 [2024-11-20 09:17:10.988210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.285 [2024-11-20 09:17:10.988237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.285 qpair failed and we were unable to recover it. 00:26:34.285 [2024-11-20 09:17:10.988363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.285 [2024-11-20 09:17:10.988391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.285 qpair failed and we were unable to recover it. 00:26:34.285 [2024-11-20 09:17:10.988536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.285 [2024-11-20 09:17:10.988562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.285 qpair failed and we were unable to recover it. 00:26:34.285 [2024-11-20 09:17:10.988702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.285 [2024-11-20 09:17:10.988732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.285 qpair failed and we were unable to recover it. 00:26:34.285 [2024-11-20 09:17:10.988869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.285 [2024-11-20 09:17:10.988899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.285 qpair failed and we were unable to recover it. 
00:26:34.285 [2024-11-20 09:17:10.989037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.285 [2024-11-20 09:17:10.989064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.285 qpair failed and we were unable to recover it. 00:26:34.285 [2024-11-20 09:17:10.989185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.285 [2024-11-20 09:17:10.989213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.285 qpair failed and we were unable to recover it. 00:26:34.285 [2024-11-20 09:17:10.989317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.285 [2024-11-20 09:17:10.989345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.285 qpair failed and we were unable to recover it. 00:26:34.285 [2024-11-20 09:17:10.989472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.285 [2024-11-20 09:17:10.989505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.285 qpair failed and we were unable to recover it. 00:26:34.285 [2024-11-20 09:17:10.989640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.285 [2024-11-20 09:17:10.989667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.285 qpair failed and we were unable to recover it. 
00:26:34.285 [2024-11-20 09:17:10.989760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.285 [2024-11-20 09:17:10.989787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.285 qpair failed and we were unable to recover it. 00:26:34.285 [2024-11-20 09:17:10.989900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.285 [2024-11-20 09:17:10.989927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.285 qpair failed and we were unable to recover it. 00:26:34.285 [2024-11-20 09:17:10.990068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.285 [2024-11-20 09:17:10.990108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.285 qpair failed and we were unable to recover it. 00:26:34.285 [2024-11-20 09:17:10.990206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.285 [2024-11-20 09:17:10.990234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.285 qpair failed and we were unable to recover it. 00:26:34.285 [2024-11-20 09:17:10.990366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.285 [2024-11-20 09:17:10.990394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.285 qpair failed and we were unable to recover it. 
00:26:34.285 [2024-11-20 09:17:10.990491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.285 [2024-11-20 09:17:10.990518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.285 qpair failed and we were unable to recover it. 00:26:34.285 [2024-11-20 09:17:10.990629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.285 [2024-11-20 09:17:10.990656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.285 qpair failed and we were unable to recover it. 00:26:34.285 [2024-11-20 09:17:10.990777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.285 [2024-11-20 09:17:10.990805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.285 qpair failed and we were unable to recover it. 00:26:34.285 [2024-11-20 09:17:10.990899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.285 [2024-11-20 09:17:10.990926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.285 qpair failed and we were unable to recover it. 00:26:34.285 [2024-11-20 09:17:10.991014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.285 [2024-11-20 09:17:10.991046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.285 qpair failed and we were unable to recover it. 
00:26:34.285 [2024-11-20 09:17:10.991141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.285 [2024-11-20 09:17:10.991169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.285 qpair failed and we were unable to recover it. 00:26:34.285 [2024-11-20 09:17:10.991258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.285 [2024-11-20 09:17:10.991284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.285 qpair failed and we were unable to recover it. 00:26:34.285 [2024-11-20 09:17:10.991426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.285 [2024-11-20 09:17:10.991453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.285 qpair failed and we were unable to recover it. 00:26:34.285 [2024-11-20 09:17:10.991573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.285 [2024-11-20 09:17:10.991600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.285 qpair failed and we were unable to recover it. 00:26:34.285 [2024-11-20 09:17:10.991711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.285 [2024-11-20 09:17:10.991739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.285 qpair failed and we were unable to recover it. 
00:26:34.285 [2024-11-20 09:17:10.991826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.285 [2024-11-20 09:17:10.991853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.285 qpair failed and we were unable to recover it. 00:26:34.285 [2024-11-20 09:17:10.991970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.285 [2024-11-20 09:17:10.991999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.285 qpair failed and we were unable to recover it. 00:26:34.285 [2024-11-20 09:17:10.992097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.285 [2024-11-20 09:17:10.992123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.285 qpair failed and we were unable to recover it. 00:26:34.285 [2024-11-20 09:17:10.992242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.285 [2024-11-20 09:17:10.992268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.285 qpair failed and we were unable to recover it. 00:26:34.285 [2024-11-20 09:17:10.992395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.992422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 
00:26:34.286 [2024-11-20 09:17:10.992535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.992560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 00:26:34.286 [2024-11-20 09:17:10.992665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.992691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 00:26:34.286 [2024-11-20 09:17:10.992812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.992839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 00:26:34.286 [2024-11-20 09:17:10.992929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.992956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 00:26:34.286 [2024-11-20 09:17:10.993069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.993096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 
00:26:34.286 [2024-11-20 09:17:10.993177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.993204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 00:26:34.286 [2024-11-20 09:17:10.993321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.993348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 00:26:34.286 [2024-11-20 09:17:10.993475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.993501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 00:26:34.286 [2024-11-20 09:17:10.993613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.993639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 00:26:34.286 [2024-11-20 09:17:10.993729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.993755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 
00:26:34.286 [2024-11-20 09:17:10.993840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.993867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 00:26:34.286 [2024-11-20 09:17:10.993980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.994009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 00:26:34.286 [2024-11-20 09:17:10.994096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.994135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 00:26:34.286 [2024-11-20 09:17:10.994236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.994263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 00:26:34.286 [2024-11-20 09:17:10.994367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.994395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 
00:26:34.286 [2024-11-20 09:17:10.994509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.994537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 00:26:34.286 [2024-11-20 09:17:10.994651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.994690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 00:26:34.286 [2024-11-20 09:17:10.994815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.994842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 00:26:34.286 [2024-11-20 09:17:10.994954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.994980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 00:26:34.286 [2024-11-20 09:17:10.995099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.995124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 
00:26:34.286 [2024-11-20 09:17:10.995211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.995237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 00:26:34.286 [2024-11-20 09:17:10.995356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.995384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 00:26:34.286 [2024-11-20 09:17:10.995499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.995525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 00:26:34.286 [2024-11-20 09:17:10.995633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.995672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 00:26:34.286 [2024-11-20 09:17:10.995790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.995817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 
00:26:34.286 [2024-11-20 09:17:10.995938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.995966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 00:26:34.286 [2024-11-20 09:17:10.996090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.996117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 00:26:34.286 [2024-11-20 09:17:10.996236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.996263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 00:26:34.286 [2024-11-20 09:17:10.996389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.996416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 00:26:34.286 [2024-11-20 09:17:10.996501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.996528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 
00:26:34.286 [2024-11-20 09:17:10.996687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.996714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 00:26:34.286 [2024-11-20 09:17:10.996828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.996855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 00:26:34.286 [2024-11-20 09:17:10.996973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.997001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 00:26:34.286 [2024-11-20 09:17:10.997113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.997140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 00:26:34.286 [2024-11-20 09:17:10.997254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.997282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 
00:26:34.286 [2024-11-20 09:17:10.997403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.286 [2024-11-20 09:17:10.997429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.286 qpair failed and we were unable to recover it. 00:26:34.287 [2024-11-20 09:17:10.997540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:10.997567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 00:26:34.287 [2024-11-20 09:17:10.997683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:10.997710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 00:26:34.287 [2024-11-20 09:17:10.997824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:10.997850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 00:26:34.287 [2024-11-20 09:17:10.997925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:10.997951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 
00:26:34.287 [2024-11-20 09:17:10.998060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:10.998086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 00:26:34.287 [2024-11-20 09:17:10.998179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:10.998209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 00:26:34.287 [2024-11-20 09:17:10.998361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:10.998388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 00:26:34.287 [2024-11-20 09:17:10.998484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:10.998510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 00:26:34.287 [2024-11-20 09:17:10.998619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:10.998644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 
00:26:34.287 [2024-11-20 09:17:10.998733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:10.998759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 00:26:34.287 [2024-11-20 09:17:10.998876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:10.998903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 00:26:34.287 [2024-11-20 09:17:10.999019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:10.999047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 00:26:34.287 [2024-11-20 09:17:10.999134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:10.999160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 00:26:34.287 [2024-11-20 09:17:10.999290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:10.999338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 
00:26:34.287 [2024-11-20 09:17:10.999434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:10.999462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 00:26:34.287 [2024-11-20 09:17:10.999555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:10.999581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 00:26:34.287 [2024-11-20 09:17:10.999668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:10.999694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 00:26:34.287 [2024-11-20 09:17:10.999787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:10.999815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 00:26:34.287 [2024-11-20 09:17:10.999907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:10.999933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 
00:26:34.287 [2024-11-20 09:17:11.000069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:11.000095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 00:26:34.287 [2024-11-20 09:17:11.000185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:11.000217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 00:26:34.287 [2024-11-20 09:17:11.000348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:11.000388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 00:26:34.287 [2024-11-20 09:17:11.000509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:11.000537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 00:26:34.287 [2024-11-20 09:17:11.000634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:11.000660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 
00:26:34.287 [2024-11-20 09:17:11.000773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:11.000800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 00:26:34.287 [2024-11-20 09:17:11.000885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:11.000914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 00:26:34.287 [2024-11-20 09:17:11.001021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:11.001048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 00:26:34.287 [2024-11-20 09:17:11.001168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:11.001195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 00:26:34.287 [2024-11-20 09:17:11.001291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:11.001325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 
00:26:34.287 [2024-11-20 09:17:11.001444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:11.001472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 00:26:34.287 [2024-11-20 09:17:11.001562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:11.001588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 00:26:34.287 [2024-11-20 09:17:11.001697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:11.001724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 00:26:34.287 [2024-11-20 09:17:11.001814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:11.001841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 00:26:34.287 [2024-11-20 09:17:11.001923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:11.001950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 
00:26:34.287 [2024-11-20 09:17:11.002044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:11.002072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 00:26:34.287 [2024-11-20 09:17:11.002170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:11.002196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 00:26:34.287 [2024-11-20 09:17:11.002276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:11.002307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.287 qpair failed and we were unable to recover it. 00:26:34.287 [2024-11-20 09:17:11.002422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.287 [2024-11-20 09:17:11.002448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 00:26:34.288 [2024-11-20 09:17:11.002539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.002565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 
00:26:34.288 [2024-11-20 09:17:11.002648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.002674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 00:26:34.288 [2024-11-20 09:17:11.002764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.002790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 00:26:34.288 [2024-11-20 09:17:11.002906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.002933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 00:26:34.288 [2024-11-20 09:17:11.003014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.003040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 00:26:34.288 [2024-11-20 09:17:11.003154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.003182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 
00:26:34.288 [2024-11-20 09:17:11.003261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.003288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 00:26:34.288 [2024-11-20 09:17:11.003389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.003428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 00:26:34.288 [2024-11-20 09:17:11.003524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.003551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 00:26:34.288 [2024-11-20 09:17:11.003659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.003691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 00:26:34.288 [2024-11-20 09:17:11.003780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.003806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 
00:26:34.288 [2024-11-20 09:17:11.003887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.003912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 00:26:34.288 [2024-11-20 09:17:11.004031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.004057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 00:26:34.288 [2024-11-20 09:17:11.004173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.004201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 00:26:34.288 [2024-11-20 09:17:11.004284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.004315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 00:26:34.288 [2024-11-20 09:17:11.004429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.004454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 
00:26:34.288 [2024-11-20 09:17:11.004566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.004592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 00:26:34.288 [2024-11-20 09:17:11.004705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.004732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 00:26:34.288 [2024-11-20 09:17:11.004816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.004842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 00:26:34.288 [2024-11-20 09:17:11.004927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.004953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 00:26:34.288 [2024-11-20 09:17:11.005072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.005102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 
00:26:34.288 [2024-11-20 09:17:11.005240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.005266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 00:26:34.288 [2024-11-20 09:17:11.005355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.005382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 00:26:34.288 [2024-11-20 09:17:11.005502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.005528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 00:26:34.288 [2024-11-20 09:17:11.005624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.005649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 00:26:34.288 [2024-11-20 09:17:11.005764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.005790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 
00:26:34.288 [2024-11-20 09:17:11.005907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.005932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 00:26:34.288 [2024-11-20 09:17:11.006026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.006053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 00:26:34.288 [2024-11-20 09:17:11.006175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.006202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 00:26:34.288 [2024-11-20 09:17:11.006298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.006335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 00:26:34.288 [2024-11-20 09:17:11.006422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.006449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 
00:26:34.288 [2024-11-20 09:17:11.006538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.006564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 00:26:34.288 [2024-11-20 09:17:11.006651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.006677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 00:26:34.288 [2024-11-20 09:17:11.006801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.006829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 00:26:34.288 [2024-11-20 09:17:11.006949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.006975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 00:26:34.288 [2024-11-20 09:17:11.007073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.007099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 
00:26:34.288 [2024-11-20 09:17:11.007197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.288 [2024-11-20 09:17:11.007237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.288 qpair failed and we were unable to recover it. 00:26:34.289 [2024-11-20 09:17:11.007349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.289 [2024-11-20 09:17:11.007377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.289 qpair failed and we were unable to recover it. 00:26:34.289 [2024-11-20 09:17:11.007471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.289 [2024-11-20 09:17:11.007497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.289 qpair failed and we were unable to recover it. 00:26:34.289 [2024-11-20 09:17:11.007582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.289 [2024-11-20 09:17:11.007608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.289 qpair failed and we were unable to recover it. 00:26:34.289 [2024-11-20 09:17:11.007692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.289 [2024-11-20 09:17:11.007717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.289 qpair failed and we were unable to recover it. 
00:26:34.289 [2024-11-20 09:17:11.007858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.289 [2024-11-20 09:17:11.007884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.289 qpair failed and we were unable to recover it. 00:26:34.289 [2024-11-20 09:17:11.007968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.289 [2024-11-20 09:17:11.007994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.289 qpair failed and we were unable to recover it. 00:26:34.289 [2024-11-20 09:17:11.008079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.289 [2024-11-20 09:17:11.008104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.289 qpair failed and we were unable to recover it. 00:26:34.289 [2024-11-20 09:17:11.008213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.289 [2024-11-20 09:17:11.008239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.289 qpair failed and we were unable to recover it. 00:26:34.289 [2024-11-20 09:17:11.008327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.289 [2024-11-20 09:17:11.008353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.289 qpair failed and we were unable to recover it. 
00:26:34.289 [2024-11-20 09:17:11.008435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.289 [2024-11-20 09:17:11.008460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.289 qpair failed and we were unable to recover it. 00:26:34.289 [2024-11-20 09:17:11.008552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.289 [2024-11-20 09:17:11.008578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.289 qpair failed and we were unable to recover it. 00:26:34.289 [2024-11-20 09:17:11.008689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.289 [2024-11-20 09:17:11.008714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.289 qpair failed and we were unable to recover it. 00:26:34.289 [2024-11-20 09:17:11.008827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.289 [2024-11-20 09:17:11.008853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.289 qpair failed and we were unable to recover it. 00:26:34.289 [2024-11-20 09:17:11.008939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.289 [2024-11-20 09:17:11.008964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.289 qpair failed and we were unable to recover it. 
00:26:34.289 [2024-11-20 09:17:11.009049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.289 [2024-11-20 09:17:11.009078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.289 qpair failed and we were unable to recover it. 00:26:34.289 [2024-11-20 09:17:11.009209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.289 [2024-11-20 09:17:11.009249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.289 qpair failed and we were unable to recover it. 00:26:34.289 [2024-11-20 09:17:11.009360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.289 [2024-11-20 09:17:11.009400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.289 qpair failed and we were unable to recover it. 00:26:34.289 [2024-11-20 09:17:11.009497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.289 [2024-11-20 09:17:11.009524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.289 qpair failed and we were unable to recover it. 00:26:34.289 [2024-11-20 09:17:11.009643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.289 [2024-11-20 09:17:11.009670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.289 qpair failed and we were unable to recover it. 
00:26:34.289 [2024-11-20 09:17:11.009758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.289 [2024-11-20 09:17:11.009786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.289 qpair failed and we were unable to recover it.
00:26:34.289 [2024-11-20 09:17:11.009876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.289 [2024-11-20 09:17:11.009903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:34.289 qpair failed and we were unable to recover it.
00:26:34.289 [2024-11-20 09:17:11.009983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.289 [2024-11-20 09:17:11.010008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:34.289 qpair failed and we were unable to recover it.
00:26:34.289 [2024-11-20 09:17:11.010096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.289 [2024-11-20 09:17:11.010122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:34.289 qpair failed and we were unable to recover it.
00:26:34.289 [2024-11-20 09:17:11.010204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.289 [2024-11-20 09:17:11.010229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:34.289 qpair failed and we were unable to recover it.
00:26:34.289 [2024-11-20 09:17:11.010367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.289 [2024-11-20 09:17:11.010393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:34.289 qpair failed and we were unable to recover it.
00:26:34.289 [2024-11-20 09:17:11.010478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.289 [2024-11-20 09:17:11.010508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.289 qpair failed and we were unable to recover it.
00:26:34.289 [2024-11-20 09:17:11.010609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.289 [2024-11-20 09:17:11.010636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.289 qpair failed and we were unable to recover it.
00:26:34.289 [2024-11-20 09:17:11.010722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.289 [2024-11-20 09:17:11.010748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.289 qpair failed and we were unable to recover it.
00:26:34.289 [2024-11-20 09:17:11.010829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.289 [2024-11-20 09:17:11.010855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.289 qpair failed and we were unable to recover it.
00:26:34.289 [2024-11-20 09:17:11.010967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.289 [2024-11-20 09:17:11.010993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.289 qpair failed and we were unable to recover it.
00:26:34.289 [2024-11-20 09:17:11.011102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.289 [2024-11-20 09:17:11.011128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.289 qpair failed and we were unable to recover it.
00:26:34.289 [2024-11-20 09:17:11.011237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.289 [2024-11-20 09:17:11.011264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.289 qpair failed and we were unable to recover it.
00:26:34.289 [2024-11-20 09:17:11.011362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.011389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.011503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.011529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.011650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.011675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.011764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.011788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.011873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.011898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.012044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.012071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.012199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.012239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.012348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.012381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.012464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.012490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.012570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.012596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.012681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.012707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.012821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.012848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.012935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.012961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.013063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.013102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.013231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.013260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.013386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.013413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.013502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.013528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.013651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.013677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.013765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.013790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.013879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.013906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.013991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.014017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.014131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.014157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.014239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.014265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.014372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.014402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.014490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.014515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.014624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.014650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.014765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.014791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.014899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.014925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.015005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.015031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.015143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.015171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.015265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.015294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.015428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.015456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.015537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.015564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.015649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.015674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.015752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.015783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.015897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.015924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.016008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.016033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.016139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.290 [2024-11-20 09:17:11.016164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:34.290 qpair failed and we were unable to recover it.
00:26:34.290 [2024-11-20 09:17:11.016250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.016276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.016370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.016396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.016474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.016500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.016588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.016613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.016722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.016747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.016838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.016863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.016942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.016968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.017079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.017105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.017196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.017235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.017390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.017420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.017515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.017540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.017628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.017654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.017769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.017795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.017910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.017935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.018017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.018044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.018126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.018156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.018256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.018294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.018428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.018456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.018570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.018596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.018682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.018708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.018795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.018822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.018932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.018959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.019087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.019126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.019245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.019274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.019368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.019393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.019503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.019529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.019625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.019650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.019762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.019788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.019869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.019895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.020018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.020048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.020138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.020165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.020254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.020280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.020379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.020406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.020491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.020517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.020594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.020620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.020700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.020731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.020808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.020834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.020938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.291 [2024-11-20 09:17:11.020967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.291 qpair failed and we were unable to recover it.
00:26:34.291 [2024-11-20 09:17:11.021096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.292 [2024-11-20 09:17:11.021124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.292 qpair failed and we were unable to recover it.
00:26:34.292 [2024-11-20 09:17:11.021213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.292 [2024-11-20 09:17:11.021239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.292 qpair failed and we were unable to recover it.
00:26:34.292 [2024-11-20 09:17:11.021347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.292 [2024-11-20 09:17:11.021374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.292 qpair failed and we were unable to recover it.
00:26:34.292 [2024-11-20 09:17:11.021462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.292 [2024-11-20 09:17:11.021489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.292 qpair failed and we were unable to recover it.
00:26:34.292 [2024-11-20 09:17:11.021565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.292 [2024-11-20 09:17:11.021591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.292 qpair failed and we were unable to recover it.
00:26:34.292 [2024-11-20 09:17:11.021676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.292 [2024-11-20 09:17:11.021702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.292 qpair failed and we were unable to recover it.
00:26:34.292 [2024-11-20 09:17:11.021819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.292 [2024-11-20 09:17:11.021845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.292 qpair failed and we were unable to recover it.
00:26:34.292 [2024-11-20 09:17:11.021935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.292 [2024-11-20 09:17:11.021960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.292 qpair failed and we were unable to recover it.
00:26:34.292 [2024-11-20 09:17:11.022076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.292 [2024-11-20 09:17:11.022103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:34.292 qpair failed and we were unable to recover it.
00:26:34.292 [2024-11-20 09:17:11.022216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.292 [2024-11-20 09:17:11.022242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420
00:26:34.292 qpair failed and we were unable to recover it.
00:26:34.292 [2024-11-20 09:17:11.022381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.292 [2024-11-20 09:17:11.022420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:34.292 qpair failed and we were unable to recover it.
00:26:34.292 [2024-11-20 09:17:11.022551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.292 [2024-11-20 09:17:11.022580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:34.292 qpair failed and we were unable to recover it.
00:26:34.292 [2024-11-20 09:17:11.022713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.292 [2024-11-20 09:17:11.022742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.292 qpair failed and we were unable to recover it.
00:26:34.292 [2024-11-20 09:17:11.022831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.292 [2024-11-20 09:17:11.022857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.292 qpair failed and we were unable to recover it.
00:26:34.292 [2024-11-20 09:17:11.022968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.292 [2024-11-20 09:17:11.022994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.292 qpair failed and we were unable to recover it.
00:26:34.292 [2024-11-20 09:17:11.023106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.292 [2024-11-20 09:17:11.023133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.292 qpair failed and we were unable to recover it.
00:26:34.292 [2024-11-20 09:17:11.023219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.292 [2024-11-20 09:17:11.023245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420
00:26:34.292 qpair failed and we were unable to recover it.
00:26:34.292 [2024-11-20 09:17:11.023370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.292 [2024-11-20 09:17:11.023397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420
00:26:34.292 qpair failed and we were unable to recover it.
00:26:34.292 [2024-11-20 09:17:11.023544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.292 [2024-11-20 09:17:11.023573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:34.292 qpair failed and we were unable to recover it.
00:26:34.292 [2024-11-20 09:17:11.023687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.292 [2024-11-20 09:17:11.023714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:34.292 qpair failed and we were unable to recover it.
00:26:34.292 [2024-11-20 09:17:11.023800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.292 [2024-11-20 09:17:11.023827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:34.292 qpair failed and we were unable to recover it.
00:26:34.292 [2024-11-20 09:17:11.023916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.292 [2024-11-20 09:17:11.023942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:34.292 qpair failed and we were unable to recover it.
00:26:34.292 [2024-11-20 09:17:11.024032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.292 [2024-11-20 09:17:11.024058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:34.292 qpair failed and we were unable to recover it.
00:26:34.292 [2024-11-20 09:17:11.024147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.292 [2024-11-20 09:17:11.024174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:34.292 qpair failed and we were unable to recover it.
00:26:34.292 [2024-11-20 09:17:11.024263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.292 [2024-11-20 09:17:11.024291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:34.292 qpair failed and we were unable to recover it.
00:26:34.292 [2024-11-20 09:17:11.024414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.292 [2024-11-20 09:17:11.024446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420
00:26:34.292 qpair failed and we were unable to recover it.
00:26:34.292 [2024-11-20 09:17:11.024554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.292 [2024-11-20 09:17:11.024580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.292 qpair failed and we were unable to recover it. 00:26:34.292 [2024-11-20 09:17:11.024668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.292 [2024-11-20 09:17:11.024696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.292 qpair failed and we were unable to recover it. 00:26:34.292 [2024-11-20 09:17:11.024834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.292 [2024-11-20 09:17:11.024861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.292 qpair failed and we were unable to recover it. 00:26:34.292 [2024-11-20 09:17:11.024950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.292 [2024-11-20 09:17:11.024978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.292 qpair failed and we were unable to recover it. 00:26:34.292 [2024-11-20 09:17:11.025100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.292 [2024-11-20 09:17:11.025127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.292 qpair failed and we were unable to recover it. 
00:26:34.292 [2024-11-20 09:17:11.025347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.292 [2024-11-20 09:17:11.025386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.292 qpair failed and we were unable to recover it. 00:26:34.292 [2024-11-20 09:17:11.025485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.292 [2024-11-20 09:17:11.025512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.292 qpair failed and we were unable to recover it. 00:26:34.292 [2024-11-20 09:17:11.025624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.292 [2024-11-20 09:17:11.025663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.292 qpair failed and we were unable to recover it. 00:26:34.292 [2024-11-20 09:17:11.025787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.292 [2024-11-20 09:17:11.025814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.292 qpair failed and we were unable to recover it. 00:26:34.292 [2024-11-20 09:17:11.025931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.292 [2024-11-20 09:17:11.025957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.292 qpair failed and we were unable to recover it. 
00:26:34.292 [2024-11-20 09:17:11.026049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.292 [2024-11-20 09:17:11.026075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.292 qpair failed and we were unable to recover it. 00:26:34.292 [2024-11-20 09:17:11.026181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.292 [2024-11-20 09:17:11.026221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 00:26:34.293 [2024-11-20 09:17:11.026390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.026429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 00:26:34.293 [2024-11-20 09:17:11.026529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.026557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 00:26:34.293 [2024-11-20 09:17:11.026639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.026666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 
00:26:34.293 [2024-11-20 09:17:11.026753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.026779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 00:26:34.293 [2024-11-20 09:17:11.026862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.026888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 00:26:34.293 [2024-11-20 09:17:11.026969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.026997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 00:26:34.293 [2024-11-20 09:17:11.027097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.027128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 00:26:34.293 [2024-11-20 09:17:11.027217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.027243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 
00:26:34.293 [2024-11-20 09:17:11.027370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.027399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 00:26:34.293 [2024-11-20 09:17:11.027512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.027538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 00:26:34.293 [2024-11-20 09:17:11.027659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.027686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 00:26:34.293 [2024-11-20 09:17:11.027798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.027825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 00:26:34.293 [2024-11-20 09:17:11.027911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.027937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 
00:26:34.293 [2024-11-20 09:17:11.028051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.028078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 00:26:34.293 [2024-11-20 09:17:11.028105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:34.293 [2024-11-20 09:17:11.028202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.028229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 00:26:34.293 [2024-11-20 09:17:11.028331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.028358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 00:26:34.293 [2024-11-20 09:17:11.028471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.028497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 00:26:34.293 [2024-11-20 09:17:11.028620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.028647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 
00:26:34.293 [2024-11-20 09:17:11.028788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.028814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 00:26:34.293 [2024-11-20 09:17:11.028926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.028952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 00:26:34.293 [2024-11-20 09:17:11.029065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.029092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 00:26:34.293 [2024-11-20 09:17:11.029212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.029238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 00:26:34.293 [2024-11-20 09:17:11.029379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.029419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 
00:26:34.293 [2024-11-20 09:17:11.029546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.029573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 00:26:34.293 [2024-11-20 09:17:11.029663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.029689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 00:26:34.293 [2024-11-20 09:17:11.029805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.029831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 00:26:34.293 [2024-11-20 09:17:11.029947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.029972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 00:26:34.293 [2024-11-20 09:17:11.030068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.030107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 
00:26:34.293 [2024-11-20 09:17:11.030225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.030253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 00:26:34.293 [2024-11-20 09:17:11.030380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.030408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 00:26:34.293 [2024-11-20 09:17:11.030493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.030520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 00:26:34.293 [2024-11-20 09:17:11.030634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.030661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 00:26:34.293 [2024-11-20 09:17:11.030746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.030772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 
00:26:34.293 [2024-11-20 09:17:11.030859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.030886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 00:26:34.293 [2024-11-20 09:17:11.030987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.031016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 00:26:34.293 [2024-11-20 09:17:11.031109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.031135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 00:26:34.293 [2024-11-20 09:17:11.031222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.293 [2024-11-20 09:17:11.031248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.293 qpair failed and we were unable to recover it. 00:26:34.293 [2024-11-20 09:17:11.031337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.294 [2024-11-20 09:17:11.031364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.294 qpair failed and we were unable to recover it. 
00:26:34.294 [2024-11-20 09:17:11.031447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.294 [2024-11-20 09:17:11.031473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.294 qpair failed and we were unable to recover it. 00:26:34.294 [2024-11-20 09:17:11.031557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.294 [2024-11-20 09:17:11.031583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.294 qpair failed and we were unable to recover it. 00:26:34.294 [2024-11-20 09:17:11.031663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.294 [2024-11-20 09:17:11.031700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.294 qpair failed and we were unable to recover it. 00:26:34.294 [2024-11-20 09:17:11.031789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.294 [2024-11-20 09:17:11.031814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.294 qpair failed and we were unable to recover it. 00:26:34.294 [2024-11-20 09:17:11.031898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.294 [2024-11-20 09:17:11.031925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.294 qpair failed and we were unable to recover it. 
00:26:34.294 [2024-11-20 09:17:11.032053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.294 [2024-11-20 09:17:11.032092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.294 qpair failed and we were unable to recover it. 00:26:34.294 [2024-11-20 09:17:11.032233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.294 [2024-11-20 09:17:11.032272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.294 qpair failed and we were unable to recover it. 00:26:34.294 [2024-11-20 09:17:11.032386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.294 [2024-11-20 09:17:11.032416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.294 qpair failed and we were unable to recover it. 00:26:34.294 [2024-11-20 09:17:11.032548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.294 [2024-11-20 09:17:11.032575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.294 qpair failed and we were unable to recover it. 00:26:34.294 [2024-11-20 09:17:11.032659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.294 [2024-11-20 09:17:11.032685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.294 qpair failed and we were unable to recover it. 
00:26:34.294 [2024-11-20 09:17:11.032777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.294 [2024-11-20 09:17:11.032803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.294 qpair failed and we were unable to recover it. 00:26:34.294 [2024-11-20 09:17:11.032901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.294 [2024-11-20 09:17:11.032928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.294 qpair failed and we were unable to recover it. 00:26:34.294 [2024-11-20 09:17:11.033023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.294 [2024-11-20 09:17:11.033050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.294 qpair failed and we were unable to recover it. 00:26:34.294 [2024-11-20 09:17:11.033170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.294 [2024-11-20 09:17:11.033197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.294 qpair failed and we were unable to recover it. 00:26:34.294 [2024-11-20 09:17:11.033283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.294 [2024-11-20 09:17:11.033317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.294 qpair failed and we were unable to recover it. 
00:26:34.294 [2024-11-20 09:17:11.033407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.294 [2024-11-20 09:17:11.033434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.294 qpair failed and we were unable to recover it. 00:26:34.294 [2024-11-20 09:17:11.033551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.294 [2024-11-20 09:17:11.033578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.294 qpair failed and we were unable to recover it. 00:26:34.294 [2024-11-20 09:17:11.033711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.294 [2024-11-20 09:17:11.033738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.294 qpair failed and we were unable to recover it. 00:26:34.294 [2024-11-20 09:17:11.033829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.294 [2024-11-20 09:17:11.033856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.294 qpair failed and we were unable to recover it. 00:26:34.294 [2024-11-20 09:17:11.033936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.294 [2024-11-20 09:17:11.033962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.294 qpair failed and we were unable to recover it. 
00:26:34.294 [2024-11-20 09:17:11.034052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.294 [2024-11-20 09:17:11.034080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.294 qpair failed and we were unable to recover it. 00:26:34.294 [2024-11-20 09:17:11.034208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.294 [2024-11-20 09:17:11.034247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.294 qpair failed and we were unable to recover it. 00:26:34.294 [2024-11-20 09:17:11.034367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.294 [2024-11-20 09:17:11.034407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.294 qpair failed and we were unable to recover it. 00:26:34.294 [2024-11-20 09:17:11.034501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.294 [2024-11-20 09:17:11.034528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.294 qpair failed and we were unable to recover it. 00:26:34.294 [2024-11-20 09:17:11.034631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.294 [2024-11-20 09:17:11.034659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.294 qpair failed and we were unable to recover it. 
00:26:34.294 [2024-11-20 09:17:11.034747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.294 [2024-11-20 09:17:11.034773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.294 qpair failed and we were unable to recover it. 00:26:34.294 [2024-11-20 09:17:11.034891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.294 [2024-11-20 09:17:11.034917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.294 qpair failed and we were unable to recover it. 00:26:34.294 [2024-11-20 09:17:11.035031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.294 [2024-11-20 09:17:11.035057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.294 qpair failed and we were unable to recover it. 00:26:34.294 [2024-11-20 09:17:11.035156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.294 [2024-11-20 09:17:11.035195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.294 qpair failed and we were unable to recover it. 00:26:34.294 [2024-11-20 09:17:11.035330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.294 [2024-11-20 09:17:11.035363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.294 qpair failed and we were unable to recover it. 
00:26:34.294 [2024-11-20 09:17:11.035459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.035487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 00:26:34.295 [2024-11-20 09:17:11.035613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.035640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 00:26:34.295 [2024-11-20 09:17:11.035731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.035758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 00:26:34.295 [2024-11-20 09:17:11.035844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.035871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 00:26:34.295 [2024-11-20 09:17:11.035985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.036012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 
00:26:34.295 [2024-11-20 09:17:11.036097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.036123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 00:26:34.295 [2024-11-20 09:17:11.036231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.036271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 00:26:34.295 [2024-11-20 09:17:11.036376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.036406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 00:26:34.295 [2024-11-20 09:17:11.036501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.036527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 00:26:34.295 [2024-11-20 09:17:11.036645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.036671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 
00:26:34.295 [2024-11-20 09:17:11.036761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.036787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 00:26:34.295 [2024-11-20 09:17:11.036881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.036908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 00:26:34.295 [2024-11-20 09:17:11.036995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.037023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 00:26:34.295 [2024-11-20 09:17:11.037119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.037159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 00:26:34.295 [2024-11-20 09:17:11.037251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.037280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 
00:26:34.295 [2024-11-20 09:17:11.037426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.037454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 00:26:34.295 [2024-11-20 09:17:11.037596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.037622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 00:26:34.295 [2024-11-20 09:17:11.037749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.037775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 00:26:34.295 [2024-11-20 09:17:11.037886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.037912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 00:26:34.295 [2024-11-20 09:17:11.038058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.038086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 
00:26:34.295 [2024-11-20 09:17:11.038199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.038226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 00:26:34.295 [2024-11-20 09:17:11.038359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.038400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 00:26:34.295 [2024-11-20 09:17:11.038501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.038529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 00:26:34.295 [2024-11-20 09:17:11.038629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.038657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 00:26:34.295 [2024-11-20 09:17:11.038777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.038804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 
00:26:34.295 [2024-11-20 09:17:11.038924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.038952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 00:26:34.295 [2024-11-20 09:17:11.039047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.039073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 00:26:34.295 [2024-11-20 09:17:11.039163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.039189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 00:26:34.295 [2024-11-20 09:17:11.039278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.039324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 00:26:34.295 [2024-11-20 09:17:11.039459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.039498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 
00:26:34.295 [2024-11-20 09:17:11.039606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.039635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 00:26:34.295 [2024-11-20 09:17:11.039764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.039790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 00:26:34.295 [2024-11-20 09:17:11.039906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.039932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 00:26:34.295 [2024-11-20 09:17:11.040050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.040076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 00:26:34.295 [2024-11-20 09:17:11.040163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.040189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 
00:26:34.295 [2024-11-20 09:17:11.040276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.040317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 00:26:34.295 [2024-11-20 09:17:11.040441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.040467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 00:26:34.295 [2024-11-20 09:17:11.040559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.295 [2024-11-20 09:17:11.040585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.295 qpair failed and we were unable to recover it. 00:26:34.295 [2024-11-20 09:17:11.040706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.040733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 00:26:34.296 [2024-11-20 09:17:11.040853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.040885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 
00:26:34.296 [2024-11-20 09:17:11.040998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.041026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 00:26:34.296 [2024-11-20 09:17:11.041162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.041189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 00:26:34.296 [2024-11-20 09:17:11.041286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.041336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 00:26:34.296 [2024-11-20 09:17:11.041444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.041484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 00:26:34.296 [2024-11-20 09:17:11.041623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.041662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 
00:26:34.296 [2024-11-20 09:17:11.041788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.041815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 00:26:34.296 [2024-11-20 09:17:11.041904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.041930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 00:26:34.296 [2024-11-20 09:17:11.042021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.042047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 00:26:34.296 [2024-11-20 09:17:11.042163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.042190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 00:26:34.296 [2024-11-20 09:17:11.042280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.042328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 
00:26:34.296 [2024-11-20 09:17:11.042430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.042458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 00:26:34.296 [2024-11-20 09:17:11.042576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.042612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 00:26:34.296 [2024-11-20 09:17:11.042730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.042756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 00:26:34.296 [2024-11-20 09:17:11.042877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.042904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 00:26:34.296 [2024-11-20 09:17:11.042991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.043017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 
00:26:34.296 [2024-11-20 09:17:11.043106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.043132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 00:26:34.296 [2024-11-20 09:17:11.043248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.043279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 00:26:34.296 [2024-11-20 09:17:11.043420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.043446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 00:26:34.296 [2024-11-20 09:17:11.043557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.043583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 00:26:34.296 [2024-11-20 09:17:11.043721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.043747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 
00:26:34.296 [2024-11-20 09:17:11.043832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.043859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 00:26:34.296 [2024-11-20 09:17:11.043976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.044002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 00:26:34.296 [2024-11-20 09:17:11.044088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.044114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 00:26:34.296 [2024-11-20 09:17:11.044203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.044229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 00:26:34.296 [2024-11-20 09:17:11.044358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.044384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 
00:26:34.296 [2024-11-20 09:17:11.044469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.044495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 00:26:34.296 [2024-11-20 09:17:11.044594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.044632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 00:26:34.296 [2024-11-20 09:17:11.044727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.044753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 00:26:34.296 [2024-11-20 09:17:11.044857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.044897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 00:26:34.296 [2024-11-20 09:17:11.045035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.045074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 
00:26:34.296 [2024-11-20 09:17:11.045219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.045249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 00:26:34.296 [2024-11-20 09:17:11.045385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.045412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 00:26:34.296 [2024-11-20 09:17:11.045528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.045554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 00:26:34.296 [2024-11-20 09:17:11.045651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.045677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 00:26:34.296 [2024-11-20 09:17:11.045815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.045841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 
00:26:34.296 [2024-11-20 09:17:11.045936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.296 [2024-11-20 09:17:11.045966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.296 qpair failed and we were unable to recover it. 00:26:34.296 [2024-11-20 09:17:11.046090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.046119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 00:26:34.297 [2024-11-20 09:17:11.046253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.046307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 00:26:34.297 [2024-11-20 09:17:11.046431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.046460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 00:26:34.297 [2024-11-20 09:17:11.046571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.046607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 
00:26:34.297 [2024-11-20 09:17:11.046732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.046759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 00:26:34.297 [2024-11-20 09:17:11.046880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.046907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 00:26:34.297 [2024-11-20 09:17:11.047027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.047055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 00:26:34.297 [2024-11-20 09:17:11.047179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.047218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 00:26:34.297 [2024-11-20 09:17:11.047322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.047350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 
00:26:34.297 [2024-11-20 09:17:11.047441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.047468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 00:26:34.297 [2024-11-20 09:17:11.047557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.047583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 00:26:34.297 [2024-11-20 09:17:11.047676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.047702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 00:26:34.297 [2024-11-20 09:17:11.047840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.047867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 00:26:34.297 [2024-11-20 09:17:11.047984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.048010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 
00:26:34.297 [2024-11-20 09:17:11.048111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.048152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 00:26:34.297 [2024-11-20 09:17:11.048300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.048344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 00:26:34.297 [2024-11-20 09:17:11.048457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.048485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 00:26:34.297 [2024-11-20 09:17:11.048577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.048604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 00:26:34.297 [2024-11-20 09:17:11.048698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.048725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 
00:26:34.297 [2024-11-20 09:17:11.048847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.048873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 00:26:34.297 [2024-11-20 09:17:11.048963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.048990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 00:26:34.297 [2024-11-20 09:17:11.049083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.049111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 00:26:34.297 [2024-11-20 09:17:11.049202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.049228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 00:26:34.297 [2024-11-20 09:17:11.049336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.049365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 
00:26:34.297 [2024-11-20 09:17:11.049479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.049507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 00:26:34.297 [2024-11-20 09:17:11.049592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.049624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 00:26:34.297 [2024-11-20 09:17:11.049739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.049765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 00:26:34.297 [2024-11-20 09:17:11.049861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.049888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 00:26:34.297 [2024-11-20 09:17:11.049990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.050030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 
00:26:34.297 [2024-11-20 09:17:11.050123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.050150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 00:26:34.297 [2024-11-20 09:17:11.050243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.050274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 00:26:34.297 [2024-11-20 09:17:11.050412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.050440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 00:26:34.297 [2024-11-20 09:17:11.050551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.050578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 00:26:34.297 [2024-11-20 09:17:11.050665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.050691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 
00:26:34.297 [2024-11-20 09:17:11.050808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.050833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 00:26:34.297 [2024-11-20 09:17:11.050973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.051000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 00:26:34.297 [2024-11-20 09:17:11.051113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.051139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 00:26:34.297 [2024-11-20 09:17:11.051249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.051276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.297 qpair failed and we were unable to recover it. 00:26:34.297 [2024-11-20 09:17:11.051389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.297 [2024-11-20 09:17:11.051419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 
00:26:34.298 [2024-11-20 09:17:11.051546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.051586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 00:26:34.298 [2024-11-20 09:17:11.051705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.051733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 00:26:34.298 [2024-11-20 09:17:11.051825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.051852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 00:26:34.298 [2024-11-20 09:17:11.051963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.051990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 00:26:34.298 [2024-11-20 09:17:11.052144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.052183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 
00:26:34.298 [2024-11-20 09:17:11.052315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.052343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 00:26:34.298 [2024-11-20 09:17:11.052435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.052463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 00:26:34.298 [2024-11-20 09:17:11.052581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.052616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 00:26:34.298 [2024-11-20 09:17:11.052731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.052758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 00:26:34.298 [2024-11-20 09:17:11.052894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.052922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 
00:26:34.298 [2024-11-20 09:17:11.053038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.053065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 00:26:34.298 [2024-11-20 09:17:11.053163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.053203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 00:26:34.298 [2024-11-20 09:17:11.053321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.053350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 00:26:34.298 [2024-11-20 09:17:11.053464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.053490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 00:26:34.298 [2024-11-20 09:17:11.053579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.053616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 
00:26:34.298 [2024-11-20 09:17:11.053731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.053757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 00:26:34.298 [2024-11-20 09:17:11.053852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.053879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 00:26:34.298 [2024-11-20 09:17:11.053974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.054001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 00:26:34.298 [2024-11-20 09:17:11.054127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.054166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 00:26:34.298 [2024-11-20 09:17:11.054316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.054345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 
00:26:34.298 [2024-11-20 09:17:11.054460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.054487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 00:26:34.298 [2024-11-20 09:17:11.054574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.054600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 00:26:34.298 [2024-11-20 09:17:11.054692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.054718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 00:26:34.298 [2024-11-20 09:17:11.054808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.054835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 00:26:34.298 [2024-11-20 09:17:11.054921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.054947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 
00:26:34.298 [2024-11-20 09:17:11.055031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.055056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 00:26:34.298 [2024-11-20 09:17:11.055142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.055169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 00:26:34.298 [2024-11-20 09:17:11.055283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.055323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 00:26:34.298 [2024-11-20 09:17:11.055444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.055471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 00:26:34.298 [2024-11-20 09:17:11.055611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.055638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 
00:26:34.298 [2024-11-20 09:17:11.055723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.055750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 00:26:34.298 [2024-11-20 09:17:11.055833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.055860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 00:26:34.298 [2024-11-20 09:17:11.055982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.056009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 00:26:34.298 [2024-11-20 09:17:11.056115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.056154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 00:26:34.298 [2024-11-20 09:17:11.056294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.056330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 
00:26:34.298 [2024-11-20 09:17:11.056419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.298 [2024-11-20 09:17:11.056445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.298 qpair failed and we were unable to recover it. 00:26:34.298 [2024-11-20 09:17:11.056555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.056582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.299 qpair failed and we were unable to recover it. 00:26:34.299 [2024-11-20 09:17:11.056679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.056704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.299 qpair failed and we were unable to recover it. 00:26:34.299 [2024-11-20 09:17:11.056821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.056849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.299 qpair failed and we were unable to recover it. 00:26:34.299 [2024-11-20 09:17:11.056970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.056996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.299 qpair failed and we were unable to recover it. 
00:26:34.299 [2024-11-20 09:17:11.057114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.057140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.299 qpair failed and we were unable to recover it. 00:26:34.299 [2024-11-20 09:17:11.057231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.057257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.299 qpair failed and we were unable to recover it. 00:26:34.299 [2024-11-20 09:17:11.057365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.057392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.299 qpair failed and we were unable to recover it. 00:26:34.299 [2024-11-20 09:17:11.057480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.057505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.299 qpair failed and we were unable to recover it. 00:26:34.299 [2024-11-20 09:17:11.057625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.057651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.299 qpair failed and we were unable to recover it. 
00:26:34.299 [2024-11-20 09:17:11.057743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.057769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.299 qpair failed and we were unable to recover it. 00:26:34.299 [2024-11-20 09:17:11.057850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.057876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.299 qpair failed and we were unable to recover it. 00:26:34.299 [2024-11-20 09:17:11.057963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.057989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.299 qpair failed and we were unable to recover it. 00:26:34.299 [2024-11-20 09:17:11.058127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.058152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.299 qpair failed and we were unable to recover it. 00:26:34.299 [2024-11-20 09:17:11.058253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.058293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.299 qpair failed and we were unable to recover it. 
00:26:34.299 [2024-11-20 09:17:11.058425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.058454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.299 qpair failed and we were unable to recover it. 00:26:34.299 [2024-11-20 09:17:11.058561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.058611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.299 qpair failed and we were unable to recover it. 00:26:34.299 [2024-11-20 09:17:11.058703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.058731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.299 qpair failed and we were unable to recover it. 00:26:34.299 [2024-11-20 09:17:11.058853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.058880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.299 qpair failed and we were unable to recover it. 00:26:34.299 [2024-11-20 09:17:11.059001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.059028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.299 qpair failed and we were unable to recover it. 
00:26:34.299 [2024-11-20 09:17:11.059181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.059209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.299 qpair failed and we were unable to recover it. 00:26:34.299 [2024-11-20 09:17:11.059296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.059328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.299 qpair failed and we were unable to recover it. 00:26:34.299 [2024-11-20 09:17:11.059447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.059473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.299 qpair failed and we were unable to recover it. 00:26:34.299 [2024-11-20 09:17:11.059556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.059582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.299 qpair failed and we were unable to recover it. 00:26:34.299 [2024-11-20 09:17:11.059679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.059705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.299 qpair failed and we were unable to recover it. 
00:26:34.299 [2024-11-20 09:17:11.059819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.059844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.299 qpair failed and we were unable to recover it. 00:26:34.299 [2024-11-20 09:17:11.059929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.059956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.299 qpair failed and we were unable to recover it. 00:26:34.299 [2024-11-20 09:17:11.060066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.060105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.299 qpair failed and we were unable to recover it. 00:26:34.299 [2024-11-20 09:17:11.060196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.060224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.299 qpair failed and we were unable to recover it. 00:26:34.299 [2024-11-20 09:17:11.060352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.060381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.299 qpair failed and we were unable to recover it. 
00:26:34.299 [2024-11-20 09:17:11.060499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.060527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.299 qpair failed and we were unable to recover it. 00:26:34.299 [2024-11-20 09:17:11.060622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.060649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.299 qpair failed and we were unable to recover it. 00:26:34.299 [2024-11-20 09:17:11.060762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.060789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.299 qpair failed and we were unable to recover it. 00:26:34.299 [2024-11-20 09:17:11.060899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.060925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.299 qpair failed and we were unable to recover it. 00:26:34.299 [2024-11-20 09:17:11.061015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.061044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.299 qpair failed and we were unable to recover it. 
00:26:34.299 [2024-11-20 09:17:11.061136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.061164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.299 qpair failed and we were unable to recover it. 00:26:34.299 [2024-11-20 09:17:11.061258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.299 [2024-11-20 09:17:11.061285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.300 qpair failed and we were unable to recover it. 00:26:34.300 [2024-11-20 09:17:11.061426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.300 [2024-11-20 09:17:11.061453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.300 qpair failed and we were unable to recover it. 00:26:34.300 [2024-11-20 09:17:11.061540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.300 [2024-11-20 09:17:11.061566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.300 qpair failed and we were unable to recover it. 00:26:34.300 [2024-11-20 09:17:11.061658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.300 [2024-11-20 09:17:11.061684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.300 qpair failed and we were unable to recover it. 
00:26:34.302 [... repeated entries omitted: the same connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock error recurs for tqpair=0x18fffa0, 0x7fe974000b90, 0x7fe96c000b90, and 0x7fe968000b90 with addr=10.0.0.2, port=4420 through 09:17:11.076205, each followed by "qpair failed and we were unable to recover it." ...]
00:26:34.302 [2024-11-20 09:17:11.076293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.302 [2024-11-20 09:17:11.076327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.303 qpair failed and we were unable to recover it. 00:26:34.303 [2024-11-20 09:17:11.076427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.303 [2024-11-20 09:17:11.076454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.303 qpair failed and we were unable to recover it. 00:26:34.303 [2024-11-20 09:17:11.076540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.303 [2024-11-20 09:17:11.076566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.303 qpair failed and we were unable to recover it. 00:26:34.303 [2024-11-20 09:17:11.076657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.303 [2024-11-20 09:17:11.076682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.303 qpair failed and we were unable to recover it. 00:26:34.303 [2024-11-20 09:17:11.076799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.303 [2024-11-20 09:17:11.076824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.303 qpair failed and we were unable to recover it. 
00:26:34.303 [2024-11-20 09:17:11.076919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.303 [2024-11-20 09:17:11.076947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.303 qpair failed and we were unable to recover it. 00:26:34.303 [2024-11-20 09:17:11.077067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.303 [2024-11-20 09:17:11.077096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.303 qpair failed and we were unable to recover it. 00:26:34.303 [2024-11-20 09:17:11.077189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.303 [2024-11-20 09:17:11.077219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.303 qpair failed and we were unable to recover it. 00:26:34.303 [2024-11-20 09:17:11.077357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.303 [2024-11-20 09:17:11.077385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.303 qpair failed and we were unable to recover it. 00:26:34.303 [2024-11-20 09:17:11.077501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.303 [2024-11-20 09:17:11.077526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.303 qpair failed and we were unable to recover it. 
00:26:34.303 [2024-11-20 09:17:11.077647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.303 [2024-11-20 09:17:11.077672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.303 qpair failed and we were unable to recover it. 00:26:34.303 [2024-11-20 09:17:11.077756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.303 [2024-11-20 09:17:11.077782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.303 qpair failed and we were unable to recover it. 00:26:34.303 [2024-11-20 09:17:11.077896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.303 [2024-11-20 09:17:11.077923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.303 qpair failed and we were unable to recover it. 00:26:34.303 [2024-11-20 09:17:11.078011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.303 [2024-11-20 09:17:11.078039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe96c000b90 with addr=10.0.0.2, port=4420 00:26:34.303 qpair failed and we were unable to recover it. 00:26:34.303 [2024-11-20 09:17:11.078155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.303 [2024-11-20 09:17:11.078183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.303 qpair failed and we were unable to recover it. 
00:26:34.303 [2024-11-20 09:17:11.078273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.303 [2024-11-20 09:17:11.078308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.303 qpair failed and we were unable to recover it. 00:26:34.303 [2024-11-20 09:17:11.078392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.303 [2024-11-20 09:17:11.078419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.303 qpair failed and we were unable to recover it. 00:26:34.303 [2024-11-20 09:17:11.078530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.303 [2024-11-20 09:17:11.078556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.303 qpair failed and we were unable to recover it. 00:26:34.303 [2024-11-20 09:17:11.078678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.303 [2024-11-20 09:17:11.078704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.303 qpair failed and we were unable to recover it. 00:26:34.303 [2024-11-20 09:17:11.078782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.303 [2024-11-20 09:17:11.078809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.303 qpair failed and we were unable to recover it. 
00:26:34.303 [2024-11-20 09:17:11.078920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.303 [2024-11-20 09:17:11.078946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.303 qpair failed and we were unable to recover it. 00:26:34.303 [2024-11-20 09:17:11.079042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.303 [2024-11-20 09:17:11.079069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.303 qpair failed and we were unable to recover it. 00:26:34.303 [2024-11-20 09:17:11.079149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.303 [2024-11-20 09:17:11.079175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.303 qpair failed and we were unable to recover it. 00:26:34.303 [2024-11-20 09:17:11.079260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.303 [2024-11-20 09:17:11.079286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.303 qpair failed and we were unable to recover it. 00:26:34.303 [2024-11-20 09:17:11.079384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.303 [2024-11-20 09:17:11.079410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.303 qpair failed and we were unable to recover it. 
00:26:34.303 [2024-11-20 09:17:11.079500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.303 [2024-11-20 09:17:11.079526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.303 qpair failed and we were unable to recover it. 00:26:34.303 [2024-11-20 09:17:11.079640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.303 [2024-11-20 09:17:11.079666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.303 qpair failed and we were unable to recover it. 00:26:34.303 [2024-11-20 09:17:11.079769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.303 [2024-11-20 09:17:11.079795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.303 qpair failed and we were unable to recover it. 00:26:34.303 [2024-11-20 09:17:11.079873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.303 [2024-11-20 09:17:11.079898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.303 qpair failed and we were unable to recover it. 00:26:34.303 [2024-11-20 09:17:11.080015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.303 [2024-11-20 09:17:11.080041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.303 qpair failed and we were unable to recover it. 
00:26:34.303 [2024-11-20 09:17:11.080115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.303 [2024-11-20 09:17:11.080141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.303 qpair failed and we were unable to recover it. 00:26:34.303 [2024-11-20 09:17:11.080223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.303 [2024-11-20 09:17:11.080252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.303 qpair failed and we were unable to recover it. 00:26:34.303 [2024-11-20 09:17:11.080382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.303 [2024-11-20 09:17:11.080409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.303 qpair failed and we were unable to recover it. 00:26:34.304 [2024-11-20 09:17:11.080520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.080547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 00:26:34.304 [2024-11-20 09:17:11.080666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.080701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 
00:26:34.304 [2024-11-20 09:17:11.080827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.080853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 00:26:34.304 [2024-11-20 09:17:11.080933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.080960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 00:26:34.304 [2024-11-20 09:17:11.081052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.081077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 00:26:34.304 [2024-11-20 09:17:11.081196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.081224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 00:26:34.304 [2024-11-20 09:17:11.081324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.081351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 
00:26:34.304 [2024-11-20 09:17:11.081438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.081464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 00:26:34.304 [2024-11-20 09:17:11.081560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.081589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 00:26:34.304 [2024-11-20 09:17:11.081679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.081705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 00:26:34.304 [2024-11-20 09:17:11.081793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.081819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 00:26:34.304 [2024-11-20 09:17:11.081905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.081931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 
00:26:34.304 [2024-11-20 09:17:11.082014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.082040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 00:26:34.304 [2024-11-20 09:17:11.082176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.082202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 00:26:34.304 [2024-11-20 09:17:11.082288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.082320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 00:26:34.304 [2024-11-20 09:17:11.082411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.082438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 00:26:34.304 [2024-11-20 09:17:11.082526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.082553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 
00:26:34.304 [2024-11-20 09:17:11.082695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.082723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 00:26:34.304 [2024-11-20 09:17:11.082806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.082833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 00:26:34.304 [2024-11-20 09:17:11.082972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.082999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 00:26:34.304 [2024-11-20 09:17:11.083085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.083113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fffa0 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 00:26:34.304 A controller has encountered a failure and is being reset. 00:26:34.304 [2024-11-20 09:17:11.083224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.083252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 
00:26:34.304 [2024-11-20 09:17:11.083341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.083368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 00:26:34.304 [2024-11-20 09:17:11.083454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.083480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 00:26:34.304 [2024-11-20 09:17:11.083591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.083617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 00:26:34.304 [2024-11-20 09:17:11.083734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.083761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 00:26:34.304 [2024-11-20 09:17:11.083846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.083872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 
00:26:34.304 [2024-11-20 09:17:11.083961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.083986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 00:26:34.304 [2024-11-20 09:17:11.084074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.084101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 00:26:34.304 [2024-11-20 09:17:11.084223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.084249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 00:26:34.304 [2024-11-20 09:17:11.084361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.084388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 00:26:34.304 [2024-11-20 09:17:11.084500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.084526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 
00:26:34.304 [2024-11-20 09:17:11.084661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.084686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 00:26:34.304 [2024-11-20 09:17:11.084781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.084807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 00:26:34.304 [2024-11-20 09:17:11.084933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.084959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 00:26:34.304 [2024-11-20 09:17:11.085048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.085074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 00:26:34.304 [2024-11-20 09:17:11.085154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.085179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 
00:26:34.304 [2024-11-20 09:17:11.085270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.304 [2024-11-20 09:17:11.085296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.304 qpair failed and we were unable to recover it. 00:26:34.305 [2024-11-20 09:17:11.085414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.305 [2024-11-20 09:17:11.085440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.305 qpair failed and we were unable to recover it. 00:26:34.305 [2024-11-20 09:17:11.085519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.305 [2024-11-20 09:17:11.085545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.305 qpair failed and we were unable to recover it. 00:26:34.305 [2024-11-20 09:17:11.085664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.305 [2024-11-20 09:17:11.085692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.305 qpair failed and we were unable to recover it. 00:26:34.305 [2024-11-20 09:17:11.085777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.305 [2024-11-20 09:17:11.085808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.305 qpair failed and we were unable to recover it. 
00:26:34.305 [2024-11-20 09:17:11.085893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.305 [2024-11-20 09:17:11.085919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.305 qpair failed and we were unable to recover it. 00:26:34.305 [2024-11-20 09:17:11.086012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.305 [2024-11-20 09:17:11.086038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.305 qpair failed and we were unable to recover it. 00:26:34.305 [2024-11-20 09:17:11.086117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.305 [2024-11-20 09:17:11.086142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.305 qpair failed and we were unable to recover it. 00:26:34.305 [2024-11-20 09:17:11.086285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.305 [2024-11-20 09:17:11.086329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.305 qpair failed and we were unable to recover it. 00:26:34.305 [2024-11-20 09:17:11.086411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.305 [2024-11-20 09:17:11.086438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe974000b90 with addr=10.0.0.2, port=4420 00:26:34.305 qpair failed and we were unable to recover it. 
00:26:34.305 [2024-11-20 09:17:11.086536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.305 [2024-11-20 09:17:11.086565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.305 qpair failed and we were unable to recover it. 00:26:34.305 [2024-11-20 09:17:11.086650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.305 [2024-11-20 09:17:11.086677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.305 qpair failed and we were unable to recover it. 00:26:34.305 [2024-11-20 09:17:11.086789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.305 [2024-11-20 09:17:11.086815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.305 qpair failed and we were unable to recover it. 00:26:34.305 [2024-11-20 09:17:11.086905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.305 [2024-11-20 09:17:11.086931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.305 qpair failed and we were unable to recover it. 00:26:34.305 [2024-11-20 09:17:11.087036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.305 [2024-11-20 09:17:11.087063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe968000b90 with addr=10.0.0.2, port=4420 00:26:34.305 qpair failed and we were unable to recover it. 
00:26:34.305 [2024-11-20 09:17:11.088367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.305 [2024-11-20 09:17:11.088406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x190df30 with addr=10.0.0.2, port=4420 00:26:34.305 [2024-11-20 09:17:11.088426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190df30 is same with the state(6) to be set 00:26:34.305 [2024-11-20 09:17:11.088452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190df30 (9): Bad file descriptor 00:26:34.305 [2024-11-20 09:17:11.088471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:26:34.305 [2024-11-20 09:17:11.088486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:26:34.305 [2024-11-20 09:17:11.088503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:34.305 Unable to reset the controller. 00:26:34.305 [2024-11-20 09:17:11.091060] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:34.305 [2024-11-20 09:17:11.091096] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:34.305 [2024-11-20 09:17:11.091111] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:34.305 [2024-11-20 09:17:11.091122] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:34.305 [2024-11-20 09:17:11.091132] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:34.305 [2024-11-20 09:17:11.092820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:26:34.305 [2024-11-20 09:17:11.092926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:26:34.305 [2024-11-20 09:17:11.093017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:26:34.305 [2024-11-20 09:17:11.093025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:34.305 09:17:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:34.305 09:17:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:26:34.305 09:17:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:34.305 09:17:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:34.305 09:17:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:34.305 09:17:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:34.305 09:17:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:34.305 09:17:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.305 09:17:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:34.305 Malloc0 00:26:34.305 09:17:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.305 09:17:11 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:34.305 09:17:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.305 09:17:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:34.305 [2024-11-20 09:17:11.275103] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:34.305 09:17:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.305 09:17:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:34.305 09:17:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.305 09:17:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:34.306 09:17:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.306 09:17:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:34.306 09:17:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.306 09:17:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:34.306 09:17:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.306 09:17:11 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:34.306 09:17:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.306 09:17:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:34.563 [2024-11-20 09:17:11.303373] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:34.563 09:17:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.563 09:17:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:34.563 09:17:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.563 09:17:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:34.563 09:17:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.563 09:17:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3441690 00:26:35.495 Controller properly reset. 
00:26:40.751 Initializing NVMe Controllers 00:26:40.751 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:40.751 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:40.751 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:40.751 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:40.751 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:40.751 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:40.751 Initialization complete. Launching workers. 00:26:40.751 Starting thread on core 1 00:26:40.751 Starting thread on core 2 00:26:40.751 Starting thread on core 3 00:26:40.751 Starting thread on core 0 00:26:40.751 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:26:40.751 00:26:40.751 real 0m10.655s 00:26:40.751 user 0m34.184s 00:26:40.752 sys 0m7.170s 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:40.752 ************************************ 00:26:40.752 END TEST nvmf_target_disconnect_tc2 00:26:40.752 ************************************ 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:40.752 09:17:17 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:40.752 rmmod nvme_tcp 00:26:40.752 rmmod nvme_fabrics 00:26:40.752 rmmod nvme_keyring 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3442209 ']' 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3442209 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 3442209 ']' 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 3442209 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3442209 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']' 
00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3442209' 00:26:40.752 killing process with pid 3442209 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@971 -- # kill 3442209 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 3442209 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:40.752 09:17:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:42.659 09:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:42.659 00:26:42.659 real 0m15.659s 00:26:42.659 user 0m59.871s 00:26:42.659 
sys 0m9.621s 00:26:42.659 09:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:42.659 09:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:42.659 ************************************ 00:26:42.659 END TEST nvmf_target_disconnect 00:26:42.659 ************************************ 00:26:42.659 09:17:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:26:42.659 00:26:42.659 real 5m6.702s 00:26:42.659 user 11m4.277s 00:26:42.659 sys 1m16.995s 00:26:42.659 09:17:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:42.659 09:17:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.659 ************************************ 00:26:42.659 END TEST nvmf_host 00:26:42.659 ************************************ 00:26:42.659 09:17:19 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:26:42.659 09:17:19 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:26:42.659 09:17:19 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:26:42.659 09:17:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:26:42.659 09:17:19 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:42.659 09:17:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:42.659 ************************************ 00:26:42.659 START TEST nvmf_target_core_interrupt_mode 00:26:42.659 ************************************ 00:26:42.659 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:26:42.659 * Looking for test storage... 
00:26:42.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:26:42.918 09:17:19 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:42.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.918 --rc 
genhtml_branch_coverage=1 00:26:42.918 --rc genhtml_function_coverage=1 00:26:42.918 --rc genhtml_legend=1 00:26:42.918 --rc geninfo_all_blocks=1 00:26:42.918 --rc geninfo_unexecuted_blocks=1 00:26:42.918 00:26:42.918 ' 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:42.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.918 --rc genhtml_branch_coverage=1 00:26:42.918 --rc genhtml_function_coverage=1 00:26:42.918 --rc genhtml_legend=1 00:26:42.918 --rc geninfo_all_blocks=1 00:26:42.918 --rc geninfo_unexecuted_blocks=1 00:26:42.918 00:26:42.918 ' 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:42.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.918 --rc genhtml_branch_coverage=1 00:26:42.918 --rc genhtml_function_coverage=1 00:26:42.918 --rc genhtml_legend=1 00:26:42.918 --rc geninfo_all_blocks=1 00:26:42.918 --rc geninfo_unexecuted_blocks=1 00:26:42.918 00:26:42.918 ' 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:42.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.918 --rc genhtml_branch_coverage=1 00:26:42.918 --rc genhtml_function_coverage=1 00:26:42.918 --rc genhtml_legend=1 00:26:42.918 --rc geninfo_all_blocks=1 00:26:42.918 --rc geninfo_unexecuted_blocks=1 00:26:42.918 00:26:42.918 ' 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:26:42.918 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:42.919 
09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.919 09:17:19 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:42.919 
09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:42.919 ************************************ 00:26:42.919 START TEST nvmf_abort 00:26:42.919 ************************************ 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:26:42.919 * Looking for test storage... 
00:26:42.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:26:42.919 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:26:42.920 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:26:42.920 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:26:42.920 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:42.920 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:26:42.920 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:26:42.920 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:42.920 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:42.920 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:26:42.920 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:26:42.920 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:42.920 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:26:42.920 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:26:42.920 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:26:42.920 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:26:42.920 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:42.920 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:26:42.920 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:26:42.920 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:42.920 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:42.920 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:26:42.920 09:17:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:42.920 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:42.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.920 --rc genhtml_branch_coverage=1 00:26:42.920 --rc genhtml_function_coverage=1 00:26:42.920 --rc genhtml_legend=1 00:26:42.920 --rc geninfo_all_blocks=1 00:26:42.920 --rc geninfo_unexecuted_blocks=1 00:26:42.920 00:26:42.920 ' 00:26:42.920 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:42.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.920 --rc genhtml_branch_coverage=1 00:26:42.920 --rc genhtml_function_coverage=1 00:26:42.920 --rc genhtml_legend=1 00:26:42.920 --rc geninfo_all_blocks=1 00:26:42.920 --rc geninfo_unexecuted_blocks=1 00:26:42.920 00:26:42.920 ' 00:26:42.920 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:42.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.920 --rc genhtml_branch_coverage=1 00:26:42.920 --rc genhtml_function_coverage=1 00:26:42.920 --rc genhtml_legend=1 00:26:42.920 --rc geninfo_all_blocks=1 00:26:42.920 --rc geninfo_unexecuted_blocks=1 00:26:42.920 00:26:42.920 ' 00:26:42.920 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:42.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.920 --rc genhtml_branch_coverage=1 00:26:42.920 --rc genhtml_function_coverage=1 00:26:42.920 --rc genhtml_legend=1 00:26:42.920 --rc geninfo_all_blocks=1 00:26:42.920 --rc geninfo_unexecuted_blocks=1 00:26:42.920 00:26:42.920 ' 00:26:42.920 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:42.920 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:26:42.920 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:42.920 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:42.920 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:42.920 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:42.920 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:42.920 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:42.920 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:42.920 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:43.178 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:43.178 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:43.178 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:43.178 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:43.178 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:43.178 09:17:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:43.178 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:43.178 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:43.178 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:43.178 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:26:43.178 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:43.178 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:43.178 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:43.178 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.178 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.178 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.178 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:26:43.178 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.178 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:26:43.178 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:43.178 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:43.178 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:43.178 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:43.178 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:43.178 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:43.178 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:43.178 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:43.178 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:43.178 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:43.178 09:17:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:43.179 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:26:43.179 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:26:43.179 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:43.179 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:43.179 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:43.179 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:43.179 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:43.179 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:43.179 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:43.179 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:43.179 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:43.179 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:43.179 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:26:43.179 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:45.709 09:17:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:45.709 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.709 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:45.710 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:45.710 
09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:45.710 Found net devices under 0000:09:00.0: cvl_0_0 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:45.710 Found net devices under 0000:09:00.1: cvl_0_1 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:45.710 09:17:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:45.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:45.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:26:45.710 00:26:45.710 --- 10.0.0.2 ping statistics --- 00:26:45.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:45.710 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:45.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:45.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:26:45.710 00:26:45.710 --- 10.0.0.1 ping statistics --- 00:26:45.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:45.710 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3444945 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3444945 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 3444945 ']' 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:45.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:45.710 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:45.710 [2024-11-20 09:17:22.313156] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:45.710 [2024-11-20 09:17:22.314366] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:26:45.710 [2024-11-20 09:17:22.314424] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:45.710 [2024-11-20 09:17:22.387267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:45.710 [2024-11-20 09:17:22.445152] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:45.710 [2024-11-20 09:17:22.445200] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:45.710 [2024-11-20 09:17:22.445229] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:45.710 [2024-11-20 09:17:22.445240] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:45.710 [2024-11-20 09:17:22.445250] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:45.710 [2024-11-20 09:17:22.446812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:45.710 [2024-11-20 09:17:22.446886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:45.711 [2024-11-20 09:17:22.446889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:45.711 [2024-11-20 09:17:22.534208] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:45.711 [2024-11-20 09:17:22.534475] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:45.711 [2024-11-20 09:17:22.534477] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
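The network bring-up recorded earlier in this log (nvmf_common.sh's `nvmf_tcp_init`) can be sketched as follows. This is a hypothetical illustration, not the CI harness itself: the target-side port `cvl_0_0` is moved into a private netns and addressed, the initiator-side port `cvl_0_1` stays in the root netns, and TCP port 4420 is opened for NVMe/TCP. Interface names and addresses are copied from this particular run and will differ on other hardware; the real commands need root, so by default the sketch only records them.

```shell
#!/usr/bin/env bash
# Sketch of the netns bring-up seen in the log. DRY_RUN=1 (default) only
# records the commands; set DRY_RUN=0 to execute them (requires root).
DRY_RUN=${DRY_RUN:-1}
CMDS=""
run() {
  CMDS+="$*"$'\n'
  [ "$DRY_RUN" = 1 ] || "$@"
}

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                          # target port into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address (root ns)
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                       # initiator -> target reachability
printf '%s' "$CMDS"
```

Isolating the target port in its own namespace is what lets a single machine act as both NVMe/TCP target and initiator over real NICs, as the subsequent ping in both directions confirms.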
00:26:45.711 [2024-11-20 09:17:22.534775] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:26:45.711 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:45.711 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:26:45.711 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:45.711 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:45.711 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:45.711 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:45.711 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:26:45.711 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.711 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:45.711 [2024-11-20 09:17:22.575578] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:45.711 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.711 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:26:45.711 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.711 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:26:45.711 Malloc0 00:26:45.711 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.711 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:45.711 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.711 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:45.711 Delay0 00:26:45.711 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.711 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:26:45.711 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.711 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:45.711 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.711 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:26:45.711 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.711 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:45.711 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.711 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:26:45.711 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.711 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:45.711 [2024-11-20 09:17:22.655783] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:45.711 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.711 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:45.711 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.711 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:45.711 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.711 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:26:45.968 [2024-11-20 09:17:22.798428] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:26:47.865 Initializing NVMe Controllers 00:26:47.865 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:26:47.865 controller IO queue size 128 less than required 00:26:47.865 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:26:47.865 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:26:47.865 Initialization complete. Launching workers. 
00:26:47.865 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28304 00:26:47.865 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28365, failed to submit 66 00:26:47.865 success 28304, unsuccessful 61, failed 0 00:26:47.865 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:47.865 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.865 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:47.865 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.865 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:47.865 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:26:47.865 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:47.865 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:26:47.865 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:47.865 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:26:47.865 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:47.865 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:47.865 rmmod nvme_tcp 00:26:47.865 rmmod nvme_fabrics 00:26:47.865 rmmod nvme_keyring 00:26:47.865 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:47.865 09:17:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:26:47.865 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:26:47.865 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3444945 ']' 00:26:47.865 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3444945 00:26:47.865 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 3444945 ']' 00:26:47.865 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 3444945 00:26:47.865 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:26:47.865 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:47.865 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3444945 00:26:48.123 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:48.123 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:48.123 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3444945' 00:26:48.123 killing process with pid 3444945 00:26:48.123 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 3444945 00:26:48.123 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 3444945 00:26:48.382 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:48.382 09:17:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:48.382 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:48.382 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:26:48.382 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:26:48.382 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:48.382 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:26:48.382 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:48.382 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:48.382 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.382 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:48.382 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:50.285 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:50.285 00:26:50.285 real 0m7.412s 00:26:50.285 user 0m9.380s 00:26:50.285 sys 0m2.898s 00:26:50.285 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:50.285 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:50.285 ************************************ 00:26:50.285 END TEST nvmf_abort 00:26:50.285 ************************************ 00:26:50.285 09:17:27 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:26:50.285 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:26:50.285 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:50.285 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:50.285 ************************************ 00:26:50.285 START TEST nvmf_ns_hotplug_stress 00:26:50.285 ************************************ 00:26:50.285 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:26:50.544 * Looking for test storage... 
00:26:50.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:50.544 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:50.544 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:26:50.544 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:50.544 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:50.544 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:50.544 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:50.544 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:50.544 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:26:50.544 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:26:50.544 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:26:50.544 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:26:50.544 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:26:50.544 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:26:50.544 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:26:50.544 09:17:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:50.544 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:26:50.544 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:26:50.544 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:50.544 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:50.544 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:26:50.544 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:26:50.544 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:50.544 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:26:50.544 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:26:50.544 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:26:50.544 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:26:50.544 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:50.544 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:26:50.544 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:26:50.544 09:17:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:50.544 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:50.544 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:26:50.544 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:50.544 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:50.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.544 --rc genhtml_branch_coverage=1 00:26:50.544 --rc genhtml_function_coverage=1 00:26:50.544 --rc genhtml_legend=1 00:26:50.544 --rc geninfo_all_blocks=1 00:26:50.544 --rc geninfo_unexecuted_blocks=1 00:26:50.544 00:26:50.544 ' 00:26:50.544 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:50.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.544 --rc genhtml_branch_coverage=1 00:26:50.544 --rc genhtml_function_coverage=1 00:26:50.544 --rc genhtml_legend=1 00:26:50.544 --rc geninfo_all_blocks=1 00:26:50.544 --rc geninfo_unexecuted_blocks=1 00:26:50.544 00:26:50.544 ' 00:26:50.544 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:50.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.544 --rc genhtml_branch_coverage=1 00:26:50.544 --rc genhtml_function_coverage=1 00:26:50.544 --rc genhtml_legend=1 00:26:50.544 --rc geninfo_all_blocks=1 00:26:50.544 --rc geninfo_unexecuted_blocks=1 00:26:50.544 00:26:50.544 ' 00:26:50.544 09:17:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:50.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.544 --rc genhtml_branch_coverage=1 00:26:50.544 --rc genhtml_function_coverage=1 00:26:50.544 --rc genhtml_legend=1 00:26:50.544 --rc geninfo_all_blocks=1 00:26:50.544 --rc geninfo_unexecuted_blocks=1 00:26:50.544 00:26:50.544 ' 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:50.545 09:17:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.545 
09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:26:50.545 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:53.074 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:53.074 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:26:53.074 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:53.074 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:53.074 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:53.074 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:53.074 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:53.074 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:26:53.074 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:53.074 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:26:53.074 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:26:53.074 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:26:53.074 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:26:53.074 
09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:26:53.074 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:26:53.074 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:53.074 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:53.074 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:53.074 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:53.074 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:53.074 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:53.075 09:17:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:53.075 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:53.075 09:17:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:53.075 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.075 
09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:53.075 Found net devices under 0000:09:00.0: cvl_0_0 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:53.075 Found net devices under 0000:09:00.1: cvl_0_1 00:26:53.075 
09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:53.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:53.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:26:53.075 00:26:53.075 --- 10.0.0.2 ping statistics --- 00:26:53.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.075 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:53.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:53.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:26:53.075 00:26:53.075 --- 10.0.0.1 ping statistics --- 00:26:53.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.075 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:53.075 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:53.076 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:53.076 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:53.076 09:17:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:53.076 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:53.076 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:53.076 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:26:53.076 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:53.076 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:53.076 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:53.076 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3447251 00:26:53.076 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:26:53.076 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3447251 00:26:53.076 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 3447251 ']' 00:26:53.076 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:53.076 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:53.076 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:53.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:53.076 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:53.076 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:53.076 [2024-11-20 09:17:29.788246] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:53.076 [2024-11-20 09:17:29.789324] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:26:53.076 [2024-11-20 09:17:29.789389] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:53.076 [2024-11-20 09:17:29.862446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:53.076 [2024-11-20 09:17:29.918601] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:53.076 [2024-11-20 09:17:29.918656] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:53.076 [2024-11-20 09:17:29.918684] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:53.076 [2024-11-20 09:17:29.918695] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:53.076 [2024-11-20 09:17:29.918704] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:53.076 [2024-11-20 09:17:29.920161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:53.076 [2024-11-20 09:17:29.920243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:53.076 [2024-11-20 09:17:29.920247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.076 [2024-11-20 09:17:30.004367] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:53.076 [2024-11-20 09:17:30.004581] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:53.076 [2024-11-20 09:17:30.004598] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:53.076 [2024-11-20 09:17:30.004901] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:26:53.076 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:53.076 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:26:53.076 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:53.076 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:53.076 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:53.076 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:53.076 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:26:53.076 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:53.333 [2024-11-20 09:17:30.325004] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:53.333 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:53.898 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:53.898 [2024-11-20 09:17:30.893250] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:53.898 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:54.155 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:26:54.721 Malloc0 00:26:54.721 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:54.978 Delay0 00:26:54.978 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:55.235 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:26:55.493 NULL1 00:26:55.493 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:26:55.750 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3447667 00:26:55.750 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3447667 00:26:55.750 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:26:55.750 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:56.007 09:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:56.264 09:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:26:56.264 09:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:26:56.521 true 00:26:56.521 09:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3447667 00:26:56.521 09:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:57.085 09:17:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:57.085 09:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:26:57.085 09:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:26:57.343 true 00:26:57.343 09:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3447667 00:26:57.343 09:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:57.600 09:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:58.163 09:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:26:58.163 09:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:26:58.163 true 00:26:58.163 09:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3447667 00:26:58.163 09:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:59.532 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:59.532 Read completed with error (sct=0, sc=11) 00:26:59.532 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:26:59.532 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:26:59.789 true 00:26:59.789 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3447667 00:26:59.789 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:00.046 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:00.303 09:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:27:00.303 
09:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:27:00.560 true 00:27:00.560 09:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3447667 00:27:00.560 09:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:00.817 09:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:01.074 09:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:27:01.074 09:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:27:01.331 true 00:27:01.331 09:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3447667 00:27:01.331 09:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:02.261 09:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:02.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:02.518 09:17:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:27:02.518 09:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:27:02.775 true 00:27:02.775 09:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3447667 00:27:02.775 09:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:03.072 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:03.329 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:27:03.329 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:27:03.586 true 00:27:03.843 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3447667 00:27:03.843 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:04.100 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:27:04.357 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:27:04.357 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:27:04.614 true 00:27:04.614 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3447667 00:27:04.614 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:05.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:05.544 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:05.801 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:27:05.801 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:27:06.058 true 00:27:06.058 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3447667 00:27:06.058 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:06.315 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:06.571 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:27:06.571 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:27:06.829 true 00:27:06.829 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3447667 00:27:06.829 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:07.086 09:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:07.343 09:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:27:07.343 09:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:27:07.601 true 00:27:07.601 09:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3447667 00:27:07.601 09:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:08.531 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:08.788 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:08.788 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:27:08.788 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:27:09.045 true 00:27:09.045 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3447667 00:27:09.045 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:09.301 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:09.558 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:27:09.558 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:27:09.815 true 00:27:10.071 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3447667 00:27:10.071 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:27:10.328 09:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:10.585 09:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:27:10.585 09:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:27:10.841 true 00:27:10.841 09:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3447667 00:27:10.841 09:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:11.771 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:11.771 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:11.771 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:12.028 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:27:12.028 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:27:12.285 true 00:27:12.285 09:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3447667 00:27:12.285 09:17:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:12.543 09:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:12.800 09:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:27:12.800 09:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:27:13.086 true 00:27:13.086 09:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3447667 00:27:13.086 09:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:14.041 09:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:14.041 09:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:27:14.041 09:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:27:14.298 true 00:27:14.556 09:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3447667 
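The fragments above repeat one iteration of the resize loop visible in the log (ns_hotplug_stress.sh @44–@50): check the perf process is alive, detach and re-attach the namespace, bump null_size, resize the null bdev. A minimal standalone sketch of that control flow; the SPDK rpc.py calls and the monitored perf PID are stubbed out (assumptions for illustration, not the real harness):

```shell
#!/usr/bin/env bash
# Sketch of the hotplug/resize loop seen in the log (ns_hotplug_stress.sh
# @44-@50). The real script drives SPDK via scripts/rpc.py; here the RPC
# calls are a no-op stub so the control flow can run on its own.
rpc() { :; }   # stand-in for /var/.../spdk/scripts/rpc.py (assumption)
pid=$$         # stand-in for the background perf PID checked with kill -0

null_size=1000
for _ in 1 2 3 4 5; do
    kill -0 "$pid" || break                                      # @44: perf still alive?
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45: detach namespace
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46: re-attach it
    null_size=$((null_size + 1))                                 # @49: grow target size
    rpc bdev_null_resize NULL1 "$null_size"                      # @50: resize backing bdev
done
echo "$null_size"
```

In the real run this loop spins until the perf process exits, which is why the log counts null_size up monotonically (1011, 1012, ...) until the `kill: (3447667) - No such process` message ends the phase.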
00:27:14.556 09:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:14.813 09:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:15.071 09:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:27:15.071 09:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:27:15.328 true 00:27:15.328 09:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3447667 00:27:15.328 09:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:16.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:16.261 09:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:16.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:16.261 09:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:27:16.261 09:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1020 00:27:16.518 true 00:27:16.518 09:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3447667 00:27:16.518 09:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:16.775 09:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:17.032 09:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:27:17.032 09:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:27:17.289 true 00:27:17.547 09:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3447667 00:27:17.547 09:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:17.804 09:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:18.062 09:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:27:18.062 09:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:27:18.319 true 00:27:18.319 09:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3447667 00:27:18.319 09:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:19.250 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:19.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:19.507 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:27:19.507 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:27:19.764 true 00:27:19.764 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3447667 00:27:19.765 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:20.021 09:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:20.278 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:27:20.278 09:17:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:27:20.534 true 00:27:20.534 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3447667 00:27:20.534 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:20.791 09:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:21.047 09:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:27:21.047 09:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:27:21.304 true 00:27:21.304 09:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3447667 00:27:21.304 09:17:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:22.235 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:22.493 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:22.493 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:27:22.749 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:27:22.749 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:27:23.006 true 00:27:23.006 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3447667 00:27:23.006 09:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:23.263 09:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:23.521 09:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:27:23.521 09:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:27:23.779 true 00:27:23.779 09:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3447667 00:27:23.779 09:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:24.036 09:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:24.293 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:27:24.293 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:27:24.550 true 00:27:24.550 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3447667 00:27:24.550 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:25.482 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:25.482 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:25.482 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:25.740 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:27:25.740 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:27:25.997 true 00:27:25.997 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3447667 00:27:25.997 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
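The perf latency summary that follows reports a Total average across both namespaces; it is the IOPS-weighted mean of the two per-namespace averages. A sketch of that arithmetic using the rounded figures from the table (so it lands near, not exactly on, the reported 21271.65 us):

```shell
#!/usr/bin/env bash
# IOPS-weighted mean latency, recomputed from the rounded table values.
total_avg=$(awk 'BEGIN {
    iops1 = 468.12;  avg1 = 101155.64   # NSID 1 from core 0
    iops2 = 7757.14; avg2 = 16450.94    # NSID 2 from core 0
    printf "%.2f", (iops1 * avg1 + iops2 * avg2) / (iops1 + iops2)
}')
echo "$total_avg"
```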
00:27:26.254 Initializing NVMe Controllers
00:27:26.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:26.254 Controller IO queue size 128, less than required.
00:27:26.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:26.254 Controller IO queue size 128, less than required.
00:27:26.254 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:26.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:27:26.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:27:26.254 Initialization complete. Launching workers.
00:27:26.254 ========================================================
00:27:26.254                                                                 Latency(us)
00:27:26.254 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:27:26.254 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     468.12       0.23  101155.64    3206.24 1040746.34
00:27:26.254 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    7757.14       3.79   16450.94    2985.72  550667.49
00:27:26.254 ========================================================
00:27:26.254 Total                                                                    :    8225.25       4.02   21271.65    2985.72 1040746.34
00:27:26.254
00:27:26.254 09:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:26.512 09:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:27:26.512 09:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:27:26.769 true
00:27:26.770 09:18:03
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3447667 00:27:26.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3447667) - No such process 00:27:26.770 09:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3447667 00:27:26.770 09:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:27.334 09:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:27.334 09:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:27:27.334 09:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:27:27.334 09:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:27:27.334 09:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:27.334 09:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:27:27.591 null0 00:27:27.849 09:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:27.849 09:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:27.849 09:18:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:27:28.106 null1 00:27:28.106 09:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:28.106 09:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:28.107 09:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:27:28.364 null2 00:27:28.365 09:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:28.365 09:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:28.365 09:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:27:28.622 null3 00:27:28.622 09:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:28.622 09:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:28.622 09:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:27:28.880 null4 00:27:28.880 09:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:28.880 09:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:28.880 09:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:27:29.138 null5 00:27:29.138 09:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:29.138 09:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:29.138 09:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:27:29.395 null6 00:27:29.395 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:29.395 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:29.395 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:27:29.653 null7 00:27:29.653 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:29.653 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:29.653 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:27:29.653 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:29.653 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:29.653 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:29.653 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:29.653 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:27:29.653 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:27:29.653 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:29.653 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:29.653 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:29.653 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:29.653 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:27:29.653 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:29.653 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:29.653 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:27:29.653 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 
00:27:29.653 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:29.653 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:29.653 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:29.653 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:27:29.653 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:29.653 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:27:29.653 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:29.653 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:29.653 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:29.654 09:18:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
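The interleaved fragments above come from the parallel phase (ns_hotplug_stress.sh @58–@66): eight add_remove workers are launched in the background, one per null bdev, and then waited on. A standalone sketch of that job-control skeleton, again with rpc.py stubbed (an assumption, not the real RPC client):

```shell
#!/usr/bin/env bash
# Sketch of the parallel add/remove phase (ns_hotplug_stress.sh @58-@66).
rpc() { :; }   # stand-in for scripts/rpc.py (assumption)

add_remove() {                  # @14-@18: cycle one nsid/bdev pair 10 times
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
    rpc bdev_null_create "null$i" 100 4096   # @60: backing bdevs null0..null7
done
for ((i = 0; i < nthreads; i++)); do
    add_remove "$((i + 1))" "null$i" &       # @63: one worker per namespace
    pids+=($!)                               # @64: record PID of the new job
done
wait "${pids[@]}"                            # @66: join all eight workers
echo "${#pids[@]}"
```

The `pids+=($!)` lines scattered through the log are exactly this bookkeeping: `$!` is the PID of the most recently backgrounded job, and the final `wait 3451798 3451799 ...` joins all of them.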
00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3451798 3451799 3451801 3451803 3451805 3451807 3451809 3451811 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:29.654 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:29.912 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:29.912 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:29.912 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:27:29.912 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:29.912 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:29.912 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:29.912 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:29.912 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:30.169 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:30.169 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:30.169 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:30.169 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:30.169 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:30.169 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:30.169 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:30.170 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:30.170 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:30.170 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:30.170 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:30.170 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:30.170 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:30.170 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:30.170 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:30.170 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:30.170 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:30.170 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:30.170 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:30.170 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:30.170 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:30.170 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:30.170 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:30.170 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:30.427 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:30.427 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:30.427 09:18:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:30.427 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:30.685 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:30.685 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:30.685 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:30.685 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:30.943 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:30.943 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:30.943 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:30.943 09:18:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:30.943 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:30.943 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:30.943 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:30.943 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:30.943 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:30.943 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:30.943 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:30.943 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:30.943 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:30.943 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:30.943 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:30.943 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:30.943 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:30.943 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:30.943 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:30.943 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:30.943 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:30.943 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:30.943 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:30.943 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:31.201 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:31.201 09:18:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:31.201 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:31.201 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:31.201 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:31.201 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:31.201 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:31.201 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:31.460 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:31.460 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.460 09:18:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:31.460 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:31.460 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.460 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:31.460 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:31.460 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.460 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:31.460 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:31.460 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.460 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:31.460 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:31.460 09:18:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.460 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:31.460 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:31.460 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.460 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:31.460 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:31.460 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.460 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:31.460 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:31.460 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.460 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:31.718 09:18:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:31.718 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:31.718 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:31.718 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:31.718 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:31.718 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:31.718 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:31.718 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:31.976 09:18:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:31.976 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.976 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:31.976 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:31.976 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.976 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:31.976 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:31.976 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.976 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:31.976 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:31.976 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:31.976 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.976 09:18:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.976 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:31.976 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:31.976 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:31.976 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.976 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:31.976 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:31.976 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.976 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:31.976 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:31.976 09:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:31.976 09:18:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:32.236 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:32.236 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:32.236 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:32.236 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:32.495 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:32.495 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:32.495 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:32.495 09:18:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:32.752 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.752 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.752 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:32.752 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.752 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.752 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:32.752 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.752 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.752 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:32.752 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.752 09:18:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.752 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:32.752 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.752 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.752 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:32.752 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.752 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.752 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:32.753 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.753 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:32.753 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.753 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:32.753 09:18:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:32.753 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:33.010 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:33.010 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:33.010 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:33.010 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:33.010 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:33.010 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:33.010 09:18:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:33.010 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:33.268 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.268 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.268 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:33.268 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.268 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.268 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:33.268 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.268 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.268 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
3 nqn.2016-06.io.spdk:cnode1 null2 00:27:33.268 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.268 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.268 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:33.268 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.268 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.268 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:33.268 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.268 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.268 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:33.269 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.269 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.269 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:33.269 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.269 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.269 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:33.526 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:33.526 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:33.526 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:33.526 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:33.526 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:33.526 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:33.526 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:33.526 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:33.784 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.784 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.784 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:33.784 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.784 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.784 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:33.784 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.784 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.784 09:18:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:33.784 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.784 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.784 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:33.784 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.784 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.784 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:33.784 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.784 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.784 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:33.784 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.784 09:18:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.784 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:33.784 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:33.784 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:33.784 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:34.044 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:34.044 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:34.044 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:34.044 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:34.044 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:34.341 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:34.341 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:34.342 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:34.618 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:34.618 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:34.618 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:34.618 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:34.618 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:34.618 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:34.618 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:34.618 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:34.618 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:34.618 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:34.618 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:34.618 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:34.618 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:34.618 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:34.618 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:34.618 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:34.618 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:34.618 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 
null7 00:27:34.618 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:34.618 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:34.618 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:34.618 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:34.618 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:34.618 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:34.877 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:34.877 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:34.877 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:34.877 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:27:34.877 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:34.877 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:34.877 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:34.877 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:35.135 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:35.135 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.135 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:35.135 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:35.135 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.135 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:35.135 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:35.135 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.135 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:35.135 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:35.135 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.135 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:35.135 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:35.135 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.135 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:35.135 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:35.135 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.135 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:35.135 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:35.135 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.135 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:35.135 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:35.135 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.135 09:18:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:35.394 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:35.394 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:35.394 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:35.394 09:18:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:35.394 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:35.394 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:35.394 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:35.394 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:27:35.652 09:18:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:35.652 rmmod nvme_tcp 00:27:35.652 rmmod nvme_fabrics 00:27:35.652 rmmod nvme_keyring 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3447251 ']' 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3447251 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 3447251 ']' 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 3447251 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3447251 00:27:35.652 09:18:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3447251' 00:27:35.652 killing process with pid 3447251 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 3447251 00:27:35.652 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 3447251 00:27:35.910 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:35.910 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:35.910 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:35.910 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:27:35.910 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:27:35.910 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:35.910 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:27:35.910 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:35.910 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:35.910 09:18:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.910 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:35.910 09:18:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.446 09:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:38.446 00:27:38.446 real 0m47.690s 00:27:38.446 user 3m20.200s 00:27:38.446 sys 0m21.550s 00:27:38.446 09:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:38.446 09:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:38.446 ************************************ 00:27:38.446 END TEST nvmf_ns_hotplug_stress 00:27:38.446 ************************************ 00:27:38.446 09:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:27:38.446 09:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:38.446 09:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:38.446 09:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:38.446 ************************************ 00:27:38.446 START TEST nvmf_delete_subsystem 00:27:38.446 ************************************ 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:27:38.446 * Looking for test storage... 00:27:38.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:27:38.446 
09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:27:38.446 09:18:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:38.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.446 --rc genhtml_branch_coverage=1 00:27:38.446 --rc genhtml_function_coverage=1 00:27:38.446 --rc genhtml_legend=1 00:27:38.446 --rc geninfo_all_blocks=1 00:27:38.446 --rc geninfo_unexecuted_blocks=1 00:27:38.446 00:27:38.446 ' 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:38.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.446 --rc genhtml_branch_coverage=1 00:27:38.446 --rc genhtml_function_coverage=1 00:27:38.446 --rc genhtml_legend=1 00:27:38.446 --rc geninfo_all_blocks=1 00:27:38.446 --rc geninfo_unexecuted_blocks=1 00:27:38.446 00:27:38.446 ' 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:38.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.446 --rc genhtml_branch_coverage=1 00:27:38.446 --rc genhtml_function_coverage=1 00:27:38.446 --rc genhtml_legend=1 00:27:38.446 --rc geninfo_all_blocks=1 00:27:38.446 --rc 
geninfo_unexecuted_blocks=1 00:27:38.446 00:27:38.446 ' 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:38.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.446 --rc genhtml_branch_coverage=1 00:27:38.446 --rc genhtml_function_coverage=1 00:27:38.446 --rc genhtml_legend=1 00:27:38.446 --rc geninfo_all_blocks=1 00:27:38.446 --rc geninfo_unexecuted_blocks=1 00:27:38.446 00:27:38.446 ' 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.446 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.447 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.447 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.447 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.447 
09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:27:38.447 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.447 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:27:38.447 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:38.447 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:38.447 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:38.447 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:38.447 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:38.447 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:38.447 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:38.447 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:38.447 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:38.447 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:38.447 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:27:38.447 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:38.447 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:38.447 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:38.447 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:38.447 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:38.447 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.447 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:38.447 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.447 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:38.447 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:38.447 09:18:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:27:38.447 09:18:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:40.346 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:09:00.1 (0x8086 - 0x159b)' 00:27:40.346 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:40.346 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:40.347 09:18:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:40.347 Found net devices under 0000:09:00.0: cvl_0_0 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:40.347 Found net devices under 0000:09:00.1: cvl_0_1 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:40.347 09:18:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:40.347 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:40.605 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:40.605 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:40.605 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:40.605 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:40.605 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:40.605 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:40.605 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:40.605 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:27:40.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:40.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:27:40.605 00:27:40.605 --- 10.0.0.2 ping statistics --- 00:27:40.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.605 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:27:40.605 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:40.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:40.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:27:40.605 00:27:40.605 --- 10.0.0.1 ping statistics --- 00:27:40.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.605 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:27:40.605 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:40.605 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:27:40.605 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:40.605 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:40.605 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:40.605 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:40.605 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:40.605 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:40.605 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:40.605 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:27:40.605 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:40.605 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:40.605 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:40.605 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3455135 00:27:40.605 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:27:40.605 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3455135 00:27:40.605 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 3455135 ']' 00:27:40.605 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:40.605 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:40.605 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:40.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:40.605 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:40.605 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:40.605 [2024-11-20 09:18:17.551670] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:40.605 [2024-11-20 09:18:17.552848] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:27:40.605 [2024-11-20 09:18:17.552915] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:40.605 [2024-11-20 09:18:17.627278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:40.863 [2024-11-20 09:18:17.685255] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:40.863 [2024-11-20 09:18:17.685340] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:40.863 [2024-11-20 09:18:17.685356] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:40.863 [2024-11-20 09:18:17.685368] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:40.863 [2024-11-20 09:18:17.685392] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:40.863 [2024-11-20 09:18:17.686886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.863 [2024-11-20 09:18:17.686892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:40.863 [2024-11-20 09:18:17.780695] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:27:40.863 [2024-11-20 09:18:17.780710] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:40.863 [2024-11-20 09:18:17.780961] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:40.863 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:40.863 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:27:40.863 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:40.863 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:40.863 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:40.863 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:40.863 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:40.863 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.864 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:40.864 [2024-11-20 09:18:17.835576] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:40.864 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.864 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:40.864 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.864 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:40.864 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.864 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:40.864 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.864 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:40.864 [2024-11-20 09:18:17.855753] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:40.864 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.864 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:27:40.864 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.864 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:40.864 NULL1 00:27:40.864 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.864 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:40.864 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.864 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:40.864 Delay0 00:27:40.864 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.864 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:40.864 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.864 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:40.864 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.864 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3455210 00:27:40.864 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:27:40.864 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:27:41.122 [2024-11-20 09:18:17.937252] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
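For readers skimming the xtrace above: once the target is up, the test issues a short RPC sequence, launches perf I/O against the subsystem, then deletes the subsystem while that I/O is in flight. A condensed transcript of those commands (same arguments as logged in this run; `rpc.py` is the stock SPDK RPC client and assumes a live `nvmf_tgt`, so read this as a sketch alongside the log, not a standalone script):

```shell
# Condensed from the rpc_cmd calls logged above; needs a running nvmf_tgt.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_null_create NULL1 1000 512          # 1000 MB null bdev, 512 B blocks
rpc.py bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000 # 1,000,000 us artificial latency, as logged
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Start perf in the background, then delete the subsystem mid-I/O,
# which produces the aborted-I/O errors seen below in the log.
spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
sleep 2
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
```

The Delay0 bdev is what keeps 128 queued I/Os outstanding long enough for the deletion to race them.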
00:27:43.018 09:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:43.018 09:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.018 09:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:43.276 Write completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 starting I/O failed: -6 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 starting I/O failed: -6 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 starting I/O failed: -6 00:27:43.276 Write completed with error (sct=0, sc=8) 00:27:43.276 Write completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Write completed with error (sct=0, sc=8) 00:27:43.276 starting I/O failed: -6 00:27:43.276 Write completed with error (sct=0, sc=8) 00:27:43.276 Write completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 starting I/O failed: -6 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Write completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 starting I/O failed: -6 00:27:43.276 Read completed with error (sct=0, 
sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Write completed with error (sct=0, sc=8) 00:27:43.276 starting I/O failed: -6 00:27:43.276 Write completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 starting I/O failed: -6 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 [2024-11-20 09:18:20.058271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc780000c40 is same with the state(6) to be set 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Write completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Write completed with error (sct=0, sc=8) 00:27:43.276 Write completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Write completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Write completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Write completed with error 
(sct=0, sc=8) 00:27:43.276 Write completed with error (sct=0, sc=8) 00:27:43.276 Write completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Write completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Write completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.276 Read completed with error (sct=0, sc=8) 00:27:43.277 Write completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Write completed with error (sct=0, sc=8) 00:27:43.277 Write completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Write completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Write completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Write completed with error (sct=0, sc=8) 00:27:43.277 Write completed with error (sct=0, sc=8) 00:27:43.277 starting I/O failed: -6 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Write completed with error (sct=0, sc=8) 00:27:43.277 Read completed 
with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Write completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 starting I/O failed: -6 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Write completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 starting I/O failed: -6 00:27:43.277 Write completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Write completed with error (sct=0, sc=8) 00:27:43.277 Write completed with error (sct=0, sc=8) 00:27:43.277 Write completed with error (sct=0, sc=8) 00:27:43.277 Write completed with error (sct=0, sc=8) 00:27:43.277 Write completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Write completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Write completed with error (sct=0, sc=8) 00:27:43.277 starting I/O failed: -6 00:27:43.277 Read completed with error (sct=0, 
sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 starting I/O failed: -6 00:27:43.277 Write completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Write completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 starting I/O failed: -6 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 starting I/O failed: -6 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 starting I/O failed: -6 00:27:43.277 Write completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 starting I/O failed: -6 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 starting I/O failed: -6 00:27:43.277 Write completed with error (sct=0, sc=8) 00:27:43.277 Write completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Write completed with error (sct=0, sc=8) 00:27:43.277 
starting I/O failed: -6 00:27:43.277 Write completed with error (sct=0, sc=8) 00:27:43.277 Write completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 starting I/O failed: -6 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 Read completed with error (sct=0, sc=8) 00:27:43.277 starting I/O failed: -6 00:27:43.277 starting I/O failed: -6 00:27:43.277 starting I/O failed: -6 00:27:43.277 starting I/O failed: -6 00:27:43.277 starting I/O failed: -6 00:27:43.277 starting I/O failed: -6 00:27:44.211 [2024-11-20 09:18:21.033470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd869a0 is same with the state(6) to be set 00:27:44.211 Read completed with error (sct=0, sc=8) 00:27:44.211 Read completed with error (sct=0, sc=8) 00:27:44.211 Write completed with error (sct=0, sc=8) 00:27:44.211 Write completed with error (sct=0, sc=8) 00:27:44.211 Read completed with error (sct=0, sc=8) 00:27:44.211 Read completed with error (sct=0, sc=8) 00:27:44.211 Read completed with error (sct=0, sc=8) 00:27:44.211 Read completed with error (sct=0, sc=8) 00:27:44.211 Read completed with error (sct=0, sc=8) 00:27:44.211 Write completed with error (sct=0, sc=8) 00:27:44.211 Read completed with error (sct=0, sc=8) 00:27:44.211 [2024-11-20 09:18:21.052614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc78000d350 is same with the state(6) to be set 00:27:44.211 Read completed with error (sct=0, sc=8) 00:27:44.211 Read completed with error (sct=0, sc=8) 00:27:44.211 Read completed with error (sct=0, sc=8) 00:27:44.211 Read completed with error (sct=0, sc=8) 00:27:44.211 Read completed with error (sct=0, sc=8) 00:27:44.211 Read completed with error (sct=0, sc=8) 00:27:44.211 Read completed with error (sct=0, sc=8) 00:27:44.211 Write completed with error (sct=0, sc=8) 00:27:44.211 Write completed with error 
(sct=0, sc=8) 00:27:44.211 Read completed with error (sct=0, sc=8) 00:27:44.211 Read completed with error (sct=0, sc=8) 00:27:44.211 Write completed with error (sct=0, sc=8) 00:27:44.211 Read completed with error (sct=0, sc=8) 00:27:44.211 Read completed with error (sct=0, sc=8) 00:27:44.211 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Write completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Write completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Write completed with error (sct=0, sc=8) 00:27:44.212 Write completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Write completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Write completed with error (sct=0, sc=8) 00:27:44.212 Write completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 [2024-11-20 09:18:21.059763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd854a0 is same with the state(6) to be set 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Write completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Write completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 
00:27:44.212 Write completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Write completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Write completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Write completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 [2024-11-20 09:18:21.060014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd852c0 is same with the state(6) to be set 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Write completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Write completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Write completed with error (sct=0, sc=8) 00:27:44.212 Write completed with error (sct=0, sc=8) 00:27:44.212 Read 
completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Write completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Write completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Write completed with error (sct=0, sc=8) 00:27:44.212 Write completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Write completed with error (sct=0, sc=8) 00:27:44.212 Write completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Write completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Write completed with error (sct=0, sc=8) 00:27:44.212 Read completed with error (sct=0, sc=8) 00:27:44.212 Write completed with error (sct=0, sc=8) 00:27:44.212 [2024-11-20 09:18:21.060234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd85860 is same with the state(6) to be set 00:27:44.212 Initializing NVMe Controllers 00:27:44.212 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:44.212 Controller IO queue size 128, less than required. 00:27:44.212 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:27:44.212 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:27:44.212 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:27:44.212 Initialization complete. Launching workers. 00:27:44.212 ======================================================== 00:27:44.212 Latency(us) 00:27:44.212 Device Information : IOPS MiB/s Average min max 00:27:44.212 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 186.51 0.09 962309.53 728.06 1012099.88 00:27:44.212 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 144.84 0.07 920733.94 490.86 1012115.87 00:27:44.212 ======================================================== 00:27:44.212 Total : 331.35 0.16 944135.77 490.86 1012115.87 00:27:44.212 00:27:44.212 [2024-11-20 09:18:21.061135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd869a0 (9): Bad file descriptor 00:27:44.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:27:44.212 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.212 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:27:44.212 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3455210 00:27:44.212 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:27:44.779 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:27:44.779 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3455210 00:27:44.779 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 
35: kill: (3455210) - No such process 00:27:44.779 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3455210 00:27:44.779 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:27:44.779 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3455210 00:27:44.779 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:27:44.779 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:44.779 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:27:44.779 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:44.779 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3455210 00:27:44.779 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:27:44.779 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:44.779 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:44.779 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:44.779 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:44.779 09:18:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.779 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:44.779 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.779 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:44.779 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.779 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:44.779 [2024-11-20 09:18:21.583738] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:44.779 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.779 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:44.779 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.779 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:44.779 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.779 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3455613 00:27:44.779 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 
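The `delay=0` assignment above seeds a retry counter for the loop that follows in the log: the script probes the perf process with `kill -0` and sleeps between probes until the process exits, at which point `kill` reports "No such process" and the loop ends. A standalone bash sketch of that pattern, with illustrative names not taken from the SPDK scripts:

```shell
#!/usr/bin/env bash
# Minimal sketch of the polling pattern in delete_subsystem.sh: probe a PID
# with `kill -0` (signal 0 = existence check only) until it exits, bounded
# by a retry budget. wait_for_exit and its arguments are illustrative.
wait_for_exit() {
    local pid=$1 max=$2 delay=0
    while kill -0 "$pid" 2>/dev/null; do
        (( delay++ > max )) && return 1   # budget exhausted, process still alive
        sleep 0.5
    done
    return 0                              # process has exited
}

sleep 1 &                                 # stand-in for spdk_nvme_perf
wait_for_exit $! 20 && echo "perf exited"
```

Signal 0 delivers nothing; it only asks the kernel whether the PID exists and is signalable, which is why the probe is cheap enough to run in a tight sleep loop.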
00:27:44.779 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3455613 00:27:44.779 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:44.779 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:27:44.779 [2024-11-20 09:18:21.647272] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:27:45.343 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:45.344 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3455613 00:27:45.344 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:45.601 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:45.601 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3455613 00:27:45.601 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:46.166 09:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:46.166 09:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3455613 
00:27:46.166 09:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:46.730 09:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:46.730 09:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3455613 00:27:46.730 09:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:47.293 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:47.293 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3455613 00:27:47.293 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:47.857 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:47.857 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3455613 00:27:47.857 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:48.114 Initializing NVMe Controllers 00:27:48.114 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:48.114 Controller IO queue size 128, less than required. 00:27:48.114 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:48.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:27:48.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:27:48.114 Initialization complete. Launching workers. 
00:27:48.114 ======================================================== 00:27:48.114 Latency(us) 00:27:48.114 Device Information : IOPS MiB/s Average min max 00:27:48.114 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005675.17 1000191.56 1043640.13 00:27:48.114 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005152.84 1000180.04 1011709.47 00:27:48.114 ======================================================== 00:27:48.114 Total : 256.00 0.12 1005414.01 1000180.04 1043640.13 00:27:48.114 00:27:48.114 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:48.114 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3455613 00:27:48.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3455613) - No such process 00:27:48.114 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3455613 00:27:48.114 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:27:48.114 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:27:48.114 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:48.114 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:27:48.114 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:48.114 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:27:48.114 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:27:48.114 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:48.114 rmmod nvme_tcp 00:27:48.114 rmmod nvme_fabrics 00:27:48.371 rmmod nvme_keyring 00:27:48.371 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:48.371 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:27:48.371 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:27:48.371 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3455135 ']' 00:27:48.371 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3455135 00:27:48.371 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 3455135 ']' 00:27:48.371 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 3455135 00:27:48.371 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:27:48.371 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:48.371 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3455135 00:27:48.371 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:48.371 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:48.371 09:18:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3455135' 00:27:48.371 killing process with pid 3455135 00:27:48.371 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 3455135 00:27:48.371 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 3455135 00:27:48.630 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:48.630 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:48.630 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:48.630 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:27:48.630 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:27:48.630 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:48.630 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:27:48.630 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:48.630 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:48.630 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.630 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:48.630 09:18:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.530 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:50.530 00:27:50.530 real 0m12.466s 00:27:50.530 user 0m24.826s 00:27:50.530 sys 0m3.733s 00:27:50.530 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:50.530 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:50.530 ************************************ 00:27:50.530 END TEST nvmf_delete_subsystem 00:27:50.530 ************************************ 00:27:50.530 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:27:50.530 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:50.530 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:50.530 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:50.530 ************************************ 00:27:50.530 START TEST nvmf_host_management 00:27:50.530 ************************************ 00:27:50.530 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:27:50.788 * Looking for test storage... 
00:27:50.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:27:50.788 09:18:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:50.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.788 --rc genhtml_branch_coverage=1 00:27:50.788 --rc genhtml_function_coverage=1 00:27:50.788 --rc genhtml_legend=1 00:27:50.788 --rc geninfo_all_blocks=1 00:27:50.788 --rc geninfo_unexecuted_blocks=1 00:27:50.788 00:27:50.788 ' 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:50.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.788 --rc genhtml_branch_coverage=1 00:27:50.788 --rc genhtml_function_coverage=1 00:27:50.788 --rc genhtml_legend=1 00:27:50.788 --rc geninfo_all_blocks=1 00:27:50.788 --rc geninfo_unexecuted_blocks=1 00:27:50.788 00:27:50.788 ' 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:50.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.788 --rc genhtml_branch_coverage=1 00:27:50.788 --rc genhtml_function_coverage=1 00:27:50.788 --rc genhtml_legend=1 00:27:50.788 --rc geninfo_all_blocks=1 00:27:50.788 --rc geninfo_unexecuted_blocks=1 00:27:50.788 00:27:50.788 ' 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:50.788 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.788 --rc genhtml_branch_coverage=1 00:27:50.788 --rc genhtml_function_coverage=1 00:27:50.788 --rc genhtml_legend=1 00:27:50.788 --rc geninfo_all_blocks=1 00:27:50.788 --rc geninfo_unexecuted_blocks=1 00:27:50.788 00:27:50.788 ' 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:50.788 09:18:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.788 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.788 
09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:27:50.789 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.789 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:27:50.789 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:50.789 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:50.789 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:50.789 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:50.789 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:50.789 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:50.789 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:50.789 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:27:50.789 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:50.789 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:50.789 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:50.789 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:50.789 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:27:50.789 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:50.789 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:50.789 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:50.789 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:50.789 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:50.789 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.789 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:50.789 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.789 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:50.789 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:50.789 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:27:50.789 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:27:52.688 
09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:52.688 09:18:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:52.688 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:52.688 09:18:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:52.688 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.688 09:18:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:52.688 Found net devices under 0000:09:00.0: cvl_0_0 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:52.688 Found net devices under 0000:09:00.1: cvl_0_1 00:27:52.688 09:18:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:52.688 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:52.689 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:52.689 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:52.689 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:52.689 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:27:52.689 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:52.689 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:52.689 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:52.689 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:52.689 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:52.689 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:52.947 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:52.947 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:52.947 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:52.947 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:52.947 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:52.947 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:52.947 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:52.947 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:52.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:52.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:27:52.947 00:27:52.947 --- 10.0.0.2 ping statistics --- 00:27:52.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.947 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:27:52.947 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:52.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:52.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:27:52.947 00:27:52.947 --- 10.0.0.1 ping statistics --- 00:27:52.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.947 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:27:52.947 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:52.947 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:27:52.947 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:52.947 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:52.947 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:52.947 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:52.947 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:27:52.947 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:52.947 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:52.947 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:27:52.947 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:27:52.947 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:27:52.947 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:52.947 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:52.947 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:52.947 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3457953 00:27:52.947 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:27:52.947 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3457953 00:27:52.947 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3457953 ']' 00:27:52.947 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:52.947 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:27:52.947 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:52.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:52.947 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:52.947 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:52.947 [2024-11-20 09:18:29.893042] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:52.947 [2024-11-20 09:18:29.894108] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:27:52.947 [2024-11-20 09:18:29.894172] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:52.947 [2024-11-20 09:18:29.969046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:53.205 [2024-11-20 09:18:30.035992] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:53.205 [2024-11-20 09:18:30.036060] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:53.205 [2024-11-20 09:18:30.036075] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:53.205 [2024-11-20 09:18:30.036086] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:53.205 [2024-11-20 09:18:30.036096] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:53.205 [2024-11-20 09:18:30.037741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:53.205 [2024-11-20 09:18:30.037801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:53.205 [2024-11-20 09:18:30.037870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:53.205 [2024-11-20 09:18:30.037873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:53.205 [2024-11-20 09:18:30.127619] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:53.205 [2024-11-20 09:18:30.127875] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:53.205 [2024-11-20 09:18:30.128155] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:53.205 [2024-11-20 09:18:30.128872] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:53.205 [2024-11-20 09:18:30.129112] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:27:53.205 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:53.205 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:27:53.205 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:53.205 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:53.205 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:53.205 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:53.205 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:53.205 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.205 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:53.205 [2024-11-20 09:18:30.182555] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:53.205 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.205 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:27:53.205 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:53.205 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:53.205 09:18:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:53.205 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:27:53.205 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:27:53.205 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.205 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:53.205 Malloc0 00:27:53.463 [2024-11-20 09:18:30.250737] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:53.463 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.463 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:27:53.463 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:53.463 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:53.463 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3458120 00:27:53.463 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3458120 /var/tmp/bdevperf.sock 00:27:53.463 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3458120 ']' 00:27:53.463 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:27:53.463 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:27:53.463 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:53.463 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:53.463 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:27:53.463 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:53.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:53.463 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:27:53.463 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:53.463 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:53.463 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:53.463 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:53.463 { 00:27:53.463 "params": { 00:27:53.463 "name": "Nvme$subsystem", 00:27:53.463 "trtype": "$TEST_TRANSPORT", 00:27:53.463 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.463 "adrfam": "ipv4", 00:27:53.463 "trsvcid": "$NVMF_PORT", 00:27:53.463 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.463 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.463 "hdgst": ${hdgst:-false}, 00:27:53.463 "ddgst": ${ddgst:-false} 00:27:53.463 }, 00:27:53.463 "method": "bdev_nvme_attach_controller" 00:27:53.463 } 00:27:53.463 EOF 00:27:53.463 )") 00:27:53.463 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:27:53.463 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:27:53.463 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:27:53.463 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:53.463 "params": { 00:27:53.463 "name": "Nvme0", 00:27:53.463 "trtype": "tcp", 00:27:53.463 "traddr": "10.0.0.2", 00:27:53.463 "adrfam": "ipv4", 00:27:53.463 "trsvcid": "4420", 00:27:53.463 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:53.463 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:53.463 "hdgst": false, 00:27:53.463 "ddgst": false 00:27:53.463 }, 00:27:53.463 "method": "bdev_nvme_attach_controller" 00:27:53.463 }' 00:27:53.463 [2024-11-20 09:18:30.333561] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:27:53.464 [2024-11-20 09:18:30.333680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3458120 ] 00:27:53.464 [2024-11-20 09:18:30.403784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.464 [2024-11-20 09:18:30.464931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.721 Running I/O for 10 seconds... 
00:27:53.978 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:53.978 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:27:53.978 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:53.978 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.978 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:53.978 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.978 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:53.978 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:27:53.978 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:53.978 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:27:53.978 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:27:53.978 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:27:53.978 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:27:53.978 09:18:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:27:53.978 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:27:53.978 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:27:53.978 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.978 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:53.978 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.978 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:27:53.978 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:27:53.978 09:18:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:27:54.238 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:27:54.238 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:27:54.238 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:27:54.238 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:27:54.238 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:54.238 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:54.238 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.238 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:27:54.238 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:27:54.238 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:27:54.238 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:27:54.238 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:27:54.238 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:27:54.238 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.238 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:54.238 [2024-11-20 09:18:31.122780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.238 [2024-11-20 09:18:31.122838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.238 [2024-11-20 09:18:31.122869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.238 [2024-11-20 09:18:31.122886] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.238 [2024-11-20 09:18:31.122903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.238 [2024-11-20 09:18:31.122917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.238 [2024-11-20 09:18:31.122932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.238 [2024-11-20 09:18:31.122946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.238 [2024-11-20 09:18:31.122962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.238 [2024-11-20 09:18:31.122975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.238 [2024-11-20 09:18:31.123001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.238 [2024-11-20 09:18:31.123015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.238 [2024-11-20 09:18:31.123030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.238 [2024-11-20 09:18:31.123044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.238 [2024-11-20 09:18:31.123059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.238 [2024-11-20 09:18:31.123072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.238 [2024-11-20 09:18:31.123087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.238 [2024-11-20 09:18:31.123101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.238 [2024-11-20 09:18:31.123116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.238 [2024-11-20 09:18:31.123130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.238 [2024-11-20 09:18:31.123145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.238 [2024-11-20 09:18:31.123158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.238 [2024-11-20 09:18:31.123174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.238 [2024-11-20 09:18:31.123187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.238 [2024-11-20 09:18:31.123202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.238 [2024-11-20 09:18:31.123217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:54.238 [2024-11-20 09:18:31.123232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.238 [2024-11-20 09:18:31.123245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.238 [2024-11-20 09:18:31.123260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.238 [2024-11-20 09:18:31.123274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.238 [2024-11-20 09:18:31.123289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.238 [2024-11-20 09:18:31.123309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.238 [2024-11-20 09:18:31.123327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.238 [2024-11-20 09:18:31.123341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.238 [2024-11-20 09:18:31.123356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.238 [2024-11-20 09:18:31.123379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.238 [2024-11-20 09:18:31.123395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.238 [2024-11-20 
09:18:31.123408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.238 [2024-11-20 09:18:31.123423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.238 [2024-11-20 09:18:31.123437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.238 [2024-11-20 09:18:31.123451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.238 [2024-11-20 09:18:31.123465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.238 [2024-11-20 09:18:31.123480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.123493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.123508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.123521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.123536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.123550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.123564] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.123578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.123593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.123607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.123622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.123635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.123651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.123672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.123687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.123701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.123716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.123737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.123755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.123770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.123785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.123799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.123814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.123828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.123843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.123857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.123871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.123885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.123900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.123914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.123938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.123951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.123966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.123979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.123995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.124008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.124023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.124037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.124051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.124064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 
[2024-11-20 09:18:31.124079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.124093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.124108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.124125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.124149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.124163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.124178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.124191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.124206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.124221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.124236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.124250] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.124264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.124278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.124293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.124314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.124330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.124344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.124359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.124372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.124387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.124401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.124416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.124429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.124444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.124457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.124472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.124486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.124505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.124520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.124534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.124548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.124563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.124577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:27:54.239 [2024-11-20 09:18:31.124592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.124605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.124621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.239 [2024-11-20 09:18:31.124634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.239 [2024-11-20 09:18:31.124649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.240 [2024-11-20 09:18:31.124663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.240 [2024-11-20 09:18:31.124678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.240 [2024-11-20 09:18:31.124691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.240 [2024-11-20 09:18:31.124706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.240 [2024-11-20 09:18:31.124719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.240 [2024-11-20 09:18:31.124734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:54.240 [2024-11-20 09:18:31.124748] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.240 [2024-11-20 09:18:31.125979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:54.240 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.240 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:27:54.240 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.240 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:54.240 task offset: 83200 on job bdev=Nvme0n1 fails 00:27:54.240 00:27:54.240 Latency(us) 00:27:54.240 [2024-11-20T08:18:31.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:54.240 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:54.240 Job: Nvme0n1 ended in about 0.40 seconds with error 00:27:54.240 Verification LBA range: start 0x0 length 0x400 00:27:54.240 Nvme0n1 : 0.40 1600.24 100.01 160.02 0.00 35306.23 2621.44 34564.17 00:27:54.240 [2024-11-20T08:18:31.269Z] =================================================================================================================== 00:27:54.240 [2024-11-20T08:18:31.269Z] Total : 1600.24 100.01 160.02 0.00 35306.23 2621.44 34564.17 00:27:54.240 [2024-11-20 09:18:31.127892] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:54.240 [2024-11-20 09:18:31.127923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2149a40 (9): Bad file descriptor 00:27:54.240 [2024-11-20 09:18:31.129082] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: 
Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:27:54.240 [2024-11-20 09:18:31.129184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:54.240 [2024-11-20 09:18:31.129212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:54.240 [2024-11-20 09:18:31.129239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:27:54.240 [2024-11-20 09:18:31.129255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:27:54.240 [2024-11-20 09:18:31.129269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:54.240 [2024-11-20 09:18:31.129282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2149a40 00:27:54.240 [2024-11-20 09:18:31.129335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2149a40 (9): Bad file descriptor 00:27:54.240 [2024-11-20 09:18:31.129361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:54.240 [2024-11-20 09:18:31.129375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:54.240 [2024-11-20 09:18:31.129390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:54.240 [2024-11-20 09:18:31.129406] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:27:54.240 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.240 09:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:27:55.173 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3458120 00:27:55.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3458120) - No such process 00:27:55.173 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:27:55.173 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:27:55.173 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:55.173 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:27:55.173 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:27:55.173 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:27:55.173 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:55.173 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:55.173 { 00:27:55.173 "params": { 00:27:55.173 "name": "Nvme$subsystem", 00:27:55.173 "trtype": "$TEST_TRANSPORT", 00:27:55.173 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:27:55.173 "adrfam": "ipv4", 00:27:55.173 "trsvcid": "$NVMF_PORT", 00:27:55.173 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.173 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.173 "hdgst": ${hdgst:-false}, 00:27:55.173 "ddgst": ${ddgst:-false} 00:27:55.173 }, 00:27:55.173 "method": "bdev_nvme_attach_controller" 00:27:55.173 } 00:27:55.173 EOF 00:27:55.173 )") 00:27:55.173 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:27:55.173 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:27:55.173 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:27:55.173 09:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:55.173 "params": { 00:27:55.173 "name": "Nvme0", 00:27:55.173 "trtype": "tcp", 00:27:55.173 "traddr": "10.0.0.2", 00:27:55.173 "adrfam": "ipv4", 00:27:55.173 "trsvcid": "4420", 00:27:55.173 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:55.173 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:55.173 "hdgst": false, 00:27:55.173 "ddgst": false 00:27:55.173 }, 00:27:55.173 "method": "bdev_nvme_attach_controller" 00:27:55.173 }' 00:27:55.173 [2024-11-20 09:18:32.184155] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:27:55.173 [2024-11-20 09:18:32.184230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3458270 ] 00:27:55.430 [2024-11-20 09:18:32.257342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.430 [2024-11-20 09:18:32.317709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.688 Running I/O for 1 seconds... 
00:27:56.879 1600.00 IOPS, 100.00 MiB/s 00:27:56.879 Latency(us) 00:27:56.879 [2024-11-20T08:18:33.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:56.879 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:56.879 Verification LBA range: start 0x0 length 0x400 00:27:56.879 Nvme0n1 : 1.03 1614.40 100.90 0.00 0.00 39012.59 6456.51 35146.71 00:27:56.879 [2024-11-20T08:18:33.908Z] =================================================================================================================== 00:27:56.879 [2024-11-20T08:18:33.908Z] Total : 1614.40 100.90 0.00 0.00 39012.59 6456.51 35146.71 00:27:56.879 09:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:27:56.879 09:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:27:56.879 09:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:56.879 09:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:56.879 09:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:27:56.879 09:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:56.879 09:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:27:56.879 09:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:56.879 09:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:27:56.879 
09:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:56.879 09:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:56.879 rmmod nvme_tcp 00:27:56.879 rmmod nvme_fabrics 00:27:56.879 rmmod nvme_keyring 00:27:57.137 09:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:57.137 09:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:27:57.137 09:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:27:57.137 09:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3457953 ']' 00:27:57.137 09:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3457953 00:27:57.137 09:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 3457953 ']' 00:27:57.137 09:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 3457953 00:27:57.137 09:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:27:57.137 09:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:57.137 09:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3457953 00:27:57.137 09:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:57.137 09:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:57.137 09:18:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3457953' 00:27:57.137 killing process with pid 3457953 00:27:57.137 09:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 3457953 00:27:57.137 09:18:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 3457953 00:27:57.137 [2024-11-20 09:18:34.148575] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:27:57.396 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:57.396 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:57.396 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:57.396 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:27:57.396 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:27:57.396 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:57.396 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:27:57.396 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:57.396 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:57.396 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:57.396 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:57.396 09:18:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:59.299 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:59.299 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:59.299 00:27:59.299 real 0m8.699s 00:27:59.299 user 0m17.508s 00:27:59.299 sys 0m3.758s 00:27:59.299 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:59.299 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:59.299 ************************************ 00:27:59.299 END TEST nvmf_host_management 00:27:59.299 ************************************ 00:27:59.299 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:27:59.299 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:59.299 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:59.299 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:59.299 ************************************ 00:27:59.299 START TEST nvmf_lvol 00:27:59.299 ************************************ 00:27:59.299 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:27:59.299 * Looking for test storage... 
00:27:59.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:59.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.560 --rc genhtml_branch_coverage=1 00:27:59.560 --rc genhtml_function_coverage=1 00:27:59.560 --rc genhtml_legend=1 00:27:59.560 --rc geninfo_all_blocks=1 00:27:59.560 --rc geninfo_unexecuted_blocks=1 00:27:59.560 00:27:59.560 ' 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:59.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.560 --rc genhtml_branch_coverage=1 00:27:59.560 --rc genhtml_function_coverage=1 00:27:59.560 --rc genhtml_legend=1 00:27:59.560 --rc geninfo_all_blocks=1 00:27:59.560 --rc geninfo_unexecuted_blocks=1 00:27:59.560 00:27:59.560 ' 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:59.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.560 --rc genhtml_branch_coverage=1 00:27:59.560 --rc genhtml_function_coverage=1 00:27:59.560 --rc genhtml_legend=1 00:27:59.560 --rc geninfo_all_blocks=1 00:27:59.560 --rc geninfo_unexecuted_blocks=1 00:27:59.560 00:27:59.560 ' 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:59.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.560 --rc genhtml_branch_coverage=1 00:27:59.560 --rc genhtml_function_coverage=1 00:27:59.560 --rc genhtml_legend=1 00:27:59.560 --rc geninfo_all_blocks=1 00:27:59.560 --rc geninfo_unexecuted_blocks=1 00:27:59.560 00:27:59.560 ' 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:27:59.560 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:59.561 
09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:27:59.561 09:18:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:02.092 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:02.092 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:28:02.092 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:02.092 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:02.092 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:02.092 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:02.092 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:02.092 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:28:02.092 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:02.092 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:28:02.092 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:28:02.092 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:28:02.092 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:28:02.092 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:28:02.092 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:28:02.092 09:18:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:02.092 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:02.092 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:02.092 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:02.092 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:02.092 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:02.092 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:02.092 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:02.092 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:02.092 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:02.092 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:02.092 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:02.092 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:02.093 09:18:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:02.093 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:02.093 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:02.093 09:18:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:02.093 Found net devices under 0000:09:00.0: cvl_0_0 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.093 09:18:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:02.093 Found net devices under 0000:09:00.1: cvl_0_1 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:02.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:02.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:28:02.093 00:28:02.093 --- 10.0.0.2 ping statistics --- 00:28:02.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:02.093 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:28:02.093 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:02.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:02.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:28:02.093 00:28:02.093 --- 10.0.0.1 ping statistics --- 00:28:02.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:02.093 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:28:02.094 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:02.094 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:28:02.094 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:02.094 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:02.094 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:02.094 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:02.094 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:02.094 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:02.094 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:02.094 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:28:02.094 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:02.094 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:02.094 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:02.094 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3460476 
00:28:02.094 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:28:02.094 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3460476 00:28:02.094 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 3460476 ']' 00:28:02.094 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:02.094 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:02.094 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:02.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:02.094 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:02.094 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:02.094 [2024-11-20 09:18:38.864179] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:02.094 [2024-11-20 09:18:38.865179] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:28:02.094 [2024-11-20 09:18:38.865226] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:02.094 [2024-11-20 09:18:38.935088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:02.094 [2024-11-20 09:18:38.995603] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:02.094 [2024-11-20 09:18:38.995661] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:02.094 [2024-11-20 09:18:38.995690] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:02.094 [2024-11-20 09:18:38.995702] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:02.094 [2024-11-20 09:18:38.995711] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:02.094 [2024-11-20 09:18:38.997242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:02.094 [2024-11-20 09:18:38.997318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:02.094 [2024-11-20 09:18:38.997322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:02.094 [2024-11-20 09:18:39.091127] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:02.094 [2024-11-20 09:18:39.091343] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:02.094 [2024-11-20 09:18:39.091364] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:28:02.094 [2024-11-20 09:18:39.091623] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:02.094 09:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:02.094 09:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:28:02.094 09:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:02.094 09:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:02.094 09:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:02.353 09:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:02.353 09:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:02.611 [2024-11-20 09:18:39.394164] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:02.611 09:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:02.868 09:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:28:02.868 09:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:03.130 09:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:28:03.130 09:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:28:03.415 09:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:28:03.701 09:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b86bafe4-c2e6-413a-b649-053db7544388 00:28:03.701 09:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b86bafe4-c2e6-413a-b649-053db7544388 lvol 20 00:28:03.959 09:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=38121d16-4f88-4ca9-95f7-245130a7e0a2 00:28:03.959 09:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:04.216 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 38121d16-4f88-4ca9-95f7-245130a7e0a2 00:28:04.473 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:04.731 [2024-11-20 09:18:41.670335] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:04.731 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:04.988 
09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3460904 00:28:04.988 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:28:04.988 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:28:06.361 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 38121d16-4f88-4ca9-95f7-245130a7e0a2 MY_SNAPSHOT 00:28:06.361 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f363f58f-8a44-41aa-b8fa-ee0a15c31283 00:28:06.361 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 38121d16-4f88-4ca9-95f7-245130a7e0a2 30 00:28:06.618 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone f363f58f-8a44-41aa-b8fa-ee0a15c31283 MY_CLONE 00:28:07.183 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b6f2e512-45f9-402f-80aa-93d4a25ad030 00:28:07.183 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate b6f2e512-45f9-402f-80aa-93d4a25ad030 00:28:07.749 09:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3460904 00:28:15.853 Initializing NVMe Controllers 00:28:15.853 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:15.853 
Controller IO queue size 128, less than required. 00:28:15.853 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:15.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:28:15.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:28:15.853 Initialization complete. Launching workers. 00:28:15.853 ======================================================== 00:28:15.853 Latency(us) 00:28:15.853 Device Information : IOPS MiB/s Average min max 00:28:15.853 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10463.50 40.87 12236.66 4427.34 89727.04 00:28:15.853 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10473.50 40.91 12229.13 4900.80 69459.93 00:28:15.853 ======================================================== 00:28:15.853 Total : 20937.00 81.79 12232.89 4427.34 89727.04 00:28:15.853 00:28:15.853 09:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:15.853 09:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 38121d16-4f88-4ca9-95f7-245130a7e0a2 00:28:16.111 09:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b86bafe4-c2e6-413a-b649-053db7544388 00:28:16.368 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:28:16.368 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:28:16.368 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:28:16.368 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:16.368 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:28:16.368 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:16.368 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:28:16.368 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:16.368 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:16.368 rmmod nvme_tcp 00:28:16.368 rmmod nvme_fabrics 00:28:16.368 rmmod nvme_keyring 00:28:16.368 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:16.369 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:28:16.369 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:28:16.369 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3460476 ']' 00:28:16.369 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3460476 00:28:16.369 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 3460476 ']' 00:28:16.369 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 3460476 00:28:16.369 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:28:16.369 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:16.369 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # ps 
--no-headers -o comm= 3460476 00:28:16.369 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:16.369 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:16.369 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3460476' 00:28:16.369 killing process with pid 3460476 00:28:16.369 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 3460476 00:28:16.369 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 3460476 00:28:16.933 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:16.933 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:16.933 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:16.933 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:28:16.933 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:28:16.933 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:16.933 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:28:16.933 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:16.933 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:16.933 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.933 09:18:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:16.933 09:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.836 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:18.836 00:28:18.836 real 0m19.439s 00:28:18.836 user 0m56.921s 00:28:18.836 sys 0m7.817s 00:28:18.836 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:18.836 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:18.836 ************************************ 00:28:18.836 END TEST nvmf_lvol 00:28:18.836 ************************************ 00:28:18.836 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:28:18.836 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:28:18.836 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:18.836 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:18.836 ************************************ 00:28:18.836 START TEST nvmf_lvs_grow 00:28:18.836 ************************************ 00:28:18.836 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:28:18.836 * Looking for test storage... 
00:28:18.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:18.836 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:18.836 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:28:18.836 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:19.095 09:18:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:19.095 09:18:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:19.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.095 --rc genhtml_branch_coverage=1 00:28:19.095 --rc genhtml_function_coverage=1 00:28:19.095 --rc genhtml_legend=1 00:28:19.095 --rc geninfo_all_blocks=1 00:28:19.095 --rc geninfo_unexecuted_blocks=1 00:28:19.095 00:28:19.095 ' 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:19.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.095 --rc genhtml_branch_coverage=1 00:28:19.095 --rc genhtml_function_coverage=1 00:28:19.095 --rc genhtml_legend=1 00:28:19.095 --rc geninfo_all_blocks=1 00:28:19.095 --rc geninfo_unexecuted_blocks=1 00:28:19.095 00:28:19.095 ' 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:19.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.095 --rc genhtml_branch_coverage=1 00:28:19.095 --rc genhtml_function_coverage=1 00:28:19.095 --rc genhtml_legend=1 00:28:19.095 --rc geninfo_all_blocks=1 00:28:19.095 --rc geninfo_unexecuted_blocks=1 00:28:19.095 00:28:19.095 ' 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:19.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.095 --rc genhtml_branch_coverage=1 00:28:19.095 --rc genhtml_function_coverage=1 00:28:19.095 --rc genhtml_legend=1 00:28:19.095 --rc geninfo_all_blocks=1 00:28:19.095 --rc 
geninfo_unexecuted_blocks=1 00:28:19.095 00:28:19.095 ' 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:19.095 09:18:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.095 09:18:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.095 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:28:19.096 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.096 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:28:19.096 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:19.096 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:19.096 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:19.096 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:19.096 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:19.096 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:19.096 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:19.096 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:19.096 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:19.096 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:19.096 09:18:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:19.096 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:19.096 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:28:19.096 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:19.096 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:19.096 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:19.096 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:19.096 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:19.096 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.096 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:19.096 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.096 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:19.096 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:19.096 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:28:19.096 09:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:21.626 
09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:21.626 09:18:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:21.626 09:18:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:21.626 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:21.626 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:21.626 Found net devices under 0000:09:00.0: cvl_0_0 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.626 09:18:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:21.626 Found net devices under 0000:09:00.1: cvl_0_1 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:21.626 
09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:21.626 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:21.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:21.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:28:21.626 00:28:21.626 --- 10.0.0.2 ping statistics --- 00:28:21.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.626 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:28:21.627 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:21.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:21.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:28:21.627 00:28:21.627 --- 10.0.0.1 ping statistics --- 00:28:21.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.627 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:28:21.627 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:21.627 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:28:21.627 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:21.627 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:21.627 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:21.627 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:21.627 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:21.627 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:21.627 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:21.627 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:28:21.627 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:21.627 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:21.627 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:21.627 09:18:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3464164 00:28:21.627 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:28:21.627 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3464164 00:28:21.627 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 3464164 ']' 00:28:21.627 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.627 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:21.627 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:21.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:21.627 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:21.627 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:21.627 [2024-11-20 09:18:58.294579] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:21.627 [2024-11-20 09:18:58.295607] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:28:21.627 [2024-11-20 09:18:58.295673] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:21.627 [2024-11-20 09:18:58.365911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.627 [2024-11-20 09:18:58.422001] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:21.627 [2024-11-20 09:18:58.422051] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:21.627 [2024-11-20 09:18:58.422080] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:21.627 [2024-11-20 09:18:58.422091] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:21.627 [2024-11-20 09:18:58.422101] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:21.627 [2024-11-20 09:18:58.422724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.627 [2024-11-20 09:18:58.514550] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:21.627 [2024-11-20 09:18:58.514868] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:21.627 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:21.627 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:28:21.627 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:21.627 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:21.627 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:21.627 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:21.627 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:21.884 [2024-11-20 09:18:58.815371] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:21.884 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:28:21.884 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:21.884 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:21.884 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:21.884 ************************************ 00:28:21.884 START TEST lvs_grow_clean 00:28:21.884 ************************************ 00:28:21.884 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:28:21.884 09:18:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:28:21.884 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:28:21.884 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:28:21.884 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:28:21.884 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:28:21.884 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:28:21.884 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:21.884 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:21.884 09:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:22.142 09:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:28:22.142 09:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:28:22.707 09:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=167f518a-04c1-4f69-a30e-77129020dc67 00:28:22.707 09:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 167f518a-04c1-4f69-a30e-77129020dc67 00:28:22.707 09:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:28:22.707 09:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:28:22.707 09:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:28:22.708 09:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 167f518a-04c1-4f69-a30e-77129020dc67 lvol 150 00:28:23.295 09:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=6b0b8b49-b0dd-4d6d-8bf0-e28e046bd665 00:28:23.295 09:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:23.295 09:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:28:23.295 [2024-11-20 09:19:00.255249] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:28:23.295 [2024-11-20 09:19:00.255384] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:28:23.295 true 00:28:23.295 09:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 167f518a-04c1-4f69-a30e-77129020dc67 00:28:23.295 09:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:28:23.553 09:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:28:23.553 09:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:23.811 09:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6b0b8b49-b0dd-4d6d-8bf0-e28e046bd665 00:28:24.376 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:24.376 [2024-11-20 09:19:01.387641] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:24.376 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:24.943 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3464601 00:28:24.943 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:28:24.943 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:24.943 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3464601 /var/tmp/bdevperf.sock 00:28:24.943 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 3464601 ']' 00:28:24.943 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:24.943 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:24.943 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:24.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:24.943 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:24.943 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:28:24.943 [2024-11-20 09:19:01.762695] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:28:24.943 [2024-11-20 09:19:01.762781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3464601 ] 00:28:24.943 [2024-11-20 09:19:01.828558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.943 [2024-11-20 09:19:01.889558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:25.200 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:25.200 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:28:25.200 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:28:25.458 Nvme0n1 00:28:25.458 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:28:26.022 [ 00:28:26.022 { 00:28:26.022 "name": "Nvme0n1", 00:28:26.022 "aliases": [ 00:28:26.022 "6b0b8b49-b0dd-4d6d-8bf0-e28e046bd665" 00:28:26.022 ], 00:28:26.022 "product_name": "NVMe disk", 00:28:26.022 
"block_size": 4096, 00:28:26.022 "num_blocks": 38912, 00:28:26.022 "uuid": "6b0b8b49-b0dd-4d6d-8bf0-e28e046bd665", 00:28:26.022 "numa_id": 0, 00:28:26.022 "assigned_rate_limits": { 00:28:26.022 "rw_ios_per_sec": 0, 00:28:26.022 "rw_mbytes_per_sec": 0, 00:28:26.022 "r_mbytes_per_sec": 0, 00:28:26.022 "w_mbytes_per_sec": 0 00:28:26.022 }, 00:28:26.022 "claimed": false, 00:28:26.022 "zoned": false, 00:28:26.022 "supported_io_types": { 00:28:26.022 "read": true, 00:28:26.022 "write": true, 00:28:26.022 "unmap": true, 00:28:26.022 "flush": true, 00:28:26.022 "reset": true, 00:28:26.022 "nvme_admin": true, 00:28:26.022 "nvme_io": true, 00:28:26.022 "nvme_io_md": false, 00:28:26.022 "write_zeroes": true, 00:28:26.022 "zcopy": false, 00:28:26.022 "get_zone_info": false, 00:28:26.022 "zone_management": false, 00:28:26.022 "zone_append": false, 00:28:26.022 "compare": true, 00:28:26.022 "compare_and_write": true, 00:28:26.022 "abort": true, 00:28:26.022 "seek_hole": false, 00:28:26.022 "seek_data": false, 00:28:26.022 "copy": true, 00:28:26.022 "nvme_iov_md": false 00:28:26.022 }, 00:28:26.022 "memory_domains": [ 00:28:26.022 { 00:28:26.022 "dma_device_id": "system", 00:28:26.022 "dma_device_type": 1 00:28:26.022 } 00:28:26.022 ], 00:28:26.022 "driver_specific": { 00:28:26.022 "nvme": [ 00:28:26.022 { 00:28:26.022 "trid": { 00:28:26.022 "trtype": "TCP", 00:28:26.022 "adrfam": "IPv4", 00:28:26.022 "traddr": "10.0.0.2", 00:28:26.022 "trsvcid": "4420", 00:28:26.022 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:26.022 }, 00:28:26.022 "ctrlr_data": { 00:28:26.022 "cntlid": 1, 00:28:26.022 "vendor_id": "0x8086", 00:28:26.022 "model_number": "SPDK bdev Controller", 00:28:26.022 "serial_number": "SPDK0", 00:28:26.022 "firmware_revision": "25.01", 00:28:26.022 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:26.022 "oacs": { 00:28:26.022 "security": 0, 00:28:26.022 "format": 0, 00:28:26.022 "firmware": 0, 00:28:26.022 "ns_manage": 0 00:28:26.022 }, 00:28:26.023 "multi_ctrlr": true, 
00:28:26.023 "ana_reporting": false 00:28:26.023 }, 00:28:26.023 "vs": { 00:28:26.023 "nvme_version": "1.3" 00:28:26.023 }, 00:28:26.023 "ns_data": { 00:28:26.023 "id": 1, 00:28:26.023 "can_share": true 00:28:26.023 } 00:28:26.023 } 00:28:26.023 ], 00:28:26.023 "mp_policy": "active_passive" 00:28:26.023 } 00:28:26.023 } 00:28:26.023 ] 00:28:26.023 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3464738 00:28:26.023 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:28:26.023 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:26.023 Running I/O for 10 seconds... 00:28:26.954 Latency(us) 00:28:26.954 [2024-11-20T08:19:03.983Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:26.954 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:26.954 Nvme0n1 : 1.00 14796.00 57.80 0.00 0.00 0.00 0.00 0.00 00:28:26.954 [2024-11-20T08:19:03.983Z] =================================================================================================================== 00:28:26.954 [2024-11-20T08:19:03.983Z] Total : 14796.00 57.80 0.00 0.00 0.00 0.00 0.00 00:28:26.954 00:28:27.886 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 167f518a-04c1-4f69-a30e-77129020dc67 00:28:28.144 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:28.144 Nvme0n1 : 2.00 15003.00 58.61 0.00 0.00 0.00 0.00 0.00 00:28:28.144 [2024-11-20T08:19:05.173Z] 
=================================================================================================================== 00:28:28.144 [2024-11-20T08:19:05.173Z] Total : 15003.00 58.61 0.00 0.00 0.00 0.00 0.00 00:28:28.144 00:28:28.144 true 00:28:28.144 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 167f518a-04c1-4f69-a30e-77129020dc67 00:28:28.144 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:28:28.402 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:28:28.402 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:28:28.402 09:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3464738 00:28:28.967 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:28.967 Nvme0n1 : 3.00 15082.00 58.91 0.00 0.00 0.00 0.00 0.00 00:28:28.967 [2024-11-20T08:19:05.996Z] =================================================================================================================== 00:28:28.967 [2024-11-20T08:19:05.996Z] Total : 15082.00 58.91 0.00 0.00 0.00 0.00 0.00 00:28:28.967 00:28:30.339 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:30.339 Nvme0n1 : 4.00 15121.50 59.07 0.00 0.00 0.00 0.00 0.00 00:28:30.339 [2024-11-20T08:19:07.368Z] =================================================================================================================== 00:28:30.339 [2024-11-20T08:19:07.368Z] Total : 15121.50 59.07 0.00 0.00 0.00 0.00 0.00 00:28:30.339 00:28:31.270 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:28:31.270 Nvme0n1 : 5.00 15170.60 59.26 0.00 0.00 0.00 0.00 0.00 00:28:31.270 [2024-11-20T08:19:08.300Z] =================================================================================================================== 00:28:31.271 [2024-11-20T08:19:08.300Z] Total : 15170.60 59.26 0.00 0.00 0.00 0.00 0.00 00:28:31.271 00:28:32.203 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:32.203 Nvme0n1 : 6.00 15224.50 59.47 0.00 0.00 0.00 0.00 0.00 00:28:32.203 [2024-11-20T08:19:09.232Z] =================================================================================================================== 00:28:32.203 [2024-11-20T08:19:09.232Z] Total : 15224.50 59.47 0.00 0.00 0.00 0.00 0.00 00:28:32.203 00:28:33.135 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:33.135 Nvme0n1 : 7.00 15263.00 59.62 0.00 0.00 0.00 0.00 0.00 00:28:33.135 [2024-11-20T08:19:10.164Z] =================================================================================================================== 00:28:33.135 [2024-11-20T08:19:10.164Z] Total : 15263.00 59.62 0.00 0.00 0.00 0.00 0.00 00:28:33.135 00:28:34.066 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:34.066 Nvme0n1 : 8.00 15296.12 59.75 0.00 0.00 0.00 0.00 0.00 00:28:34.066 [2024-11-20T08:19:11.095Z] =================================================================================================================== 00:28:34.066 [2024-11-20T08:19:11.095Z] Total : 15296.12 59.75 0.00 0.00 0.00 0.00 0.00 00:28:34.066 00:28:35.028 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:35.028 Nvme0n1 : 9.00 15346.33 59.95 0.00 0.00 0.00 0.00 0.00 00:28:35.028 [2024-11-20T08:19:12.057Z] =================================================================================================================== 00:28:35.028 [2024-11-20T08:19:12.057Z] Total : 15346.33 59.95 0.00 0.00 0.00 0.00 0.00 00:28:35.028 
00:28:35.985 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:35.985 Nvme0n1 : 10.00 15373.80 60.05 0.00 0.00 0.00 0.00 0.00 00:28:35.985 [2024-11-20T08:19:13.014Z] =================================================================================================================== 00:28:35.985 [2024-11-20T08:19:13.014Z] Total : 15373.80 60.05 0.00 0.00 0.00 0.00 0.00 00:28:35.985 00:28:35.985 00:28:35.985 Latency(us) 00:28:35.985 [2024-11-20T08:19:13.014Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:35.985 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:35.985 Nvme0n1 : 10.01 15375.25 60.06 0.00 0.00 8320.62 3592.34 18641.35 00:28:35.985 [2024-11-20T08:19:13.014Z] =================================================================================================================== 00:28:35.985 [2024-11-20T08:19:13.014Z] Total : 15375.25 60.06 0.00 0.00 8320.62 3592.34 18641.35 00:28:35.985 { 00:28:35.985 "results": [ 00:28:35.985 { 00:28:35.985 "job": "Nvme0n1", 00:28:35.985 "core_mask": "0x2", 00:28:35.985 "workload": "randwrite", 00:28:35.985 "status": "finished", 00:28:35.985 "queue_depth": 128, 00:28:35.985 "io_size": 4096, 00:28:35.985 "runtime": 10.00738, 00:28:35.985 "iops": 15375.25306323933, 00:28:35.985 "mibps": 60.05958227827863, 00:28:35.985 "io_failed": 0, 00:28:35.985 "io_timeout": 0, 00:28:35.985 "avg_latency_us": 8320.620320808246, 00:28:35.985 "min_latency_us": 3592.343703703704, 00:28:35.985 "max_latency_us": 18641.35111111111 00:28:35.985 } 00:28:35.985 ], 00:28:35.985 "core_count": 1 00:28:35.985 } 00:28:35.985 09:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3464601 00:28:35.985 09:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 3464601 ']' 00:28:35.985 09:19:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 3464601 00:28:35.985 09:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:28:35.985 09:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:35.985 09:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3464601 00:28:36.243 09:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:36.243 09:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:36.243 09:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3464601' 00:28:36.243 killing process with pid 3464601 00:28:36.243 09:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 3464601 00:28:36.243 Received shutdown signal, test time was about 10.000000 seconds 00:28:36.243 00:28:36.243 Latency(us) 00:28:36.243 [2024-11-20T08:19:13.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:36.243 [2024-11-20T08:19:13.272Z] =================================================================================================================== 00:28:36.243 [2024-11-20T08:19:13.272Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:36.243 09:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 3464601 00:28:36.243 09:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:36.808 09:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:37.066 09:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 167f518a-04c1-4f69-a30e-77129020dc67 00:28:37.066 09:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:28:37.324 09:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:28:37.324 09:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:28:37.324 09:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:37.582 [2024-11-20 09:19:14.391337] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:28:37.583 09:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 167f518a-04c1-4f69-a30e-77129020dc67 00:28:37.583 09:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:28:37.583 09:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 167f518a-04c1-4f69-a30e-77129020dc67 00:28:37.583 09:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:37.583 09:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:37.583 09:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:37.583 09:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:37.583 09:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:37.583 09:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:37.583 09:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:37.583 09:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:28:37.583 09:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 167f518a-04c1-4f69-a30e-77129020dc67 00:28:37.840 request: 00:28:37.840 { 00:28:37.840 "uuid": "167f518a-04c1-4f69-a30e-77129020dc67", 00:28:37.840 "method": 
"bdev_lvol_get_lvstores", 00:28:37.840 "req_id": 1 00:28:37.840 } 00:28:37.840 Got JSON-RPC error response 00:28:37.840 response: 00:28:37.840 { 00:28:37.840 "code": -19, 00:28:37.840 "message": "No such device" 00:28:37.840 } 00:28:37.840 09:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:28:37.840 09:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:37.840 09:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:37.840 09:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:37.840 09:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:38.098 aio_bdev 00:28:38.098 09:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6b0b8b49-b0dd-4d6d-8bf0-e28e046bd665 00:28:38.098 09:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=6b0b8b49-b0dd-4d6d-8bf0-e28e046bd665 00:28:38.098 09:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:28:38.098 09:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:28:38.098 09:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:28:38.098 09:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:28:38.098 09:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:38.355 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6b0b8b49-b0dd-4d6d-8bf0-e28e046bd665 -t 2000 00:28:38.613 [ 00:28:38.613 { 00:28:38.613 "name": "6b0b8b49-b0dd-4d6d-8bf0-e28e046bd665", 00:28:38.613 "aliases": [ 00:28:38.613 "lvs/lvol" 00:28:38.613 ], 00:28:38.613 "product_name": "Logical Volume", 00:28:38.613 "block_size": 4096, 00:28:38.613 "num_blocks": 38912, 00:28:38.613 "uuid": "6b0b8b49-b0dd-4d6d-8bf0-e28e046bd665", 00:28:38.613 "assigned_rate_limits": { 00:28:38.613 "rw_ios_per_sec": 0, 00:28:38.613 "rw_mbytes_per_sec": 0, 00:28:38.613 "r_mbytes_per_sec": 0, 00:28:38.613 "w_mbytes_per_sec": 0 00:28:38.613 }, 00:28:38.613 "claimed": false, 00:28:38.613 "zoned": false, 00:28:38.613 "supported_io_types": { 00:28:38.613 "read": true, 00:28:38.613 "write": true, 00:28:38.613 "unmap": true, 00:28:38.613 "flush": false, 00:28:38.613 "reset": true, 00:28:38.613 "nvme_admin": false, 00:28:38.613 "nvme_io": false, 00:28:38.613 "nvme_io_md": false, 00:28:38.613 "write_zeroes": true, 00:28:38.613 "zcopy": false, 00:28:38.613 "get_zone_info": false, 00:28:38.613 "zone_management": false, 00:28:38.613 "zone_append": false, 00:28:38.613 "compare": false, 00:28:38.613 "compare_and_write": false, 00:28:38.613 "abort": false, 00:28:38.613 "seek_hole": true, 00:28:38.613 "seek_data": true, 00:28:38.613 "copy": false, 00:28:38.613 "nvme_iov_md": false 00:28:38.613 }, 00:28:38.613 "driver_specific": { 00:28:38.613 "lvol": { 00:28:38.613 "lvol_store_uuid": "167f518a-04c1-4f69-a30e-77129020dc67", 00:28:38.613 "base_bdev": "aio_bdev", 00:28:38.613 
"thin_provision": false, 00:28:38.613 "num_allocated_clusters": 38, 00:28:38.613 "snapshot": false, 00:28:38.613 "clone": false, 00:28:38.613 "esnap_clone": false 00:28:38.613 } 00:28:38.613 } 00:28:38.613 } 00:28:38.613 ] 00:28:38.613 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:28:38.613 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 167f518a-04c1-4f69-a30e-77129020dc67 00:28:38.613 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:28:38.871 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:28:38.871 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 167f518a-04c1-4f69-a30e-77129020dc67 00:28:38.871 09:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:28:39.128 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:28:39.128 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6b0b8b49-b0dd-4d6d-8bf0-e28e046bd665 00:28:39.386 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 167f518a-04c1-4f69-a30e-77129020dc67 
00:28:39.643 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:39.901 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:39.901 00:28:39.901 real 0m18.061s 00:28:39.901 user 0m17.623s 00:28:39.901 sys 0m1.904s 00:28:39.901 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:39.901 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:28:39.901 ************************************ 00:28:39.901 END TEST lvs_grow_clean 00:28:39.901 ************************************ 00:28:40.160 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:28:40.160 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:40.160 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:40.160 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:40.160 ************************************ 00:28:40.160 START TEST lvs_grow_dirty 00:28:40.160 ************************************ 00:28:40.160 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:28:40.160 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:28:40.160 09:19:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:28:40.160 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:28:40.160 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:28:40.160 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:28:40.160 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:28:40.160 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:40.160 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:40.160 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:40.418 09:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:28:40.418 09:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:28:40.676 09:19:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=e357c36d-501b-457b-a6c6-eaa20521682e 00:28:40.676 09:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e357c36d-501b-457b-a6c6-eaa20521682e 00:28:40.676 09:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:28:40.934 09:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:28:40.934 09:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:28:40.934 09:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e357c36d-501b-457b-a6c6-eaa20521682e lvol 150 00:28:41.216 09:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=fa33bd27-6100-46e2-8296-3af6e1e9f9dd 00:28:41.216 09:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:41.216 09:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:28:41.474 [2024-11-20 09:19:18.347318] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:28:41.474 [2024-11-20 
09:19:18.347442] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:28:41.474 true 00:28:41.474 09:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e357c36d-501b-457b-a6c6-eaa20521682e 00:28:41.474 09:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:28:41.732 09:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:28:41.732 09:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:41.990 09:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fa33bd27-6100-46e2-8296-3af6e1e9f9dd 00:28:42.248 09:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:42.506 [2024-11-20 09:19:19.423528] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:42.506 09:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:42.763 09:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3466763 00:28:42.763 09:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:28:42.763 09:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:42.763 09:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3466763 /var/tmp/bdevperf.sock 00:28:42.763 09:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3466763 ']' 00:28:42.763 09:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:42.764 09:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:42.764 09:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:42.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:42.764 09:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:42.764 09:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:42.764 [2024-11-20 09:19:19.758451] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:28:42.764 [2024-11-20 09:19:19.758542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3466763 ] 00:28:43.021 [2024-11-20 09:19:19.827476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.021 [2024-11-20 09:19:19.889273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:43.021 09:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:43.021 09:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:28:43.021 09:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:28:43.585 Nvme0n1 00:28:43.585 09:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:28:43.842 [ 00:28:43.842 { 00:28:43.842 "name": "Nvme0n1", 00:28:43.842 "aliases": [ 00:28:43.842 "fa33bd27-6100-46e2-8296-3af6e1e9f9dd" 00:28:43.842 ], 00:28:43.842 "product_name": "NVMe disk", 00:28:43.842 "block_size": 4096, 00:28:43.842 "num_blocks": 38912, 00:28:43.842 "uuid": "fa33bd27-6100-46e2-8296-3af6e1e9f9dd", 00:28:43.842 "numa_id": 0, 00:28:43.842 "assigned_rate_limits": { 00:28:43.842 "rw_ios_per_sec": 0, 00:28:43.842 "rw_mbytes_per_sec": 0, 00:28:43.842 "r_mbytes_per_sec": 0, 00:28:43.842 "w_mbytes_per_sec": 0 00:28:43.842 }, 00:28:43.842 "claimed": false, 00:28:43.842 "zoned": false, 
00:28:43.842 "supported_io_types": { 00:28:43.842 "read": true, 00:28:43.842 "write": true, 00:28:43.842 "unmap": true, 00:28:43.842 "flush": true, 00:28:43.842 "reset": true, 00:28:43.842 "nvme_admin": true, 00:28:43.842 "nvme_io": true, 00:28:43.842 "nvme_io_md": false, 00:28:43.842 "write_zeroes": true, 00:28:43.842 "zcopy": false, 00:28:43.842 "get_zone_info": false, 00:28:43.842 "zone_management": false, 00:28:43.842 "zone_append": false, 00:28:43.842 "compare": true, 00:28:43.842 "compare_and_write": true, 00:28:43.842 "abort": true, 00:28:43.842 "seek_hole": false, 00:28:43.842 "seek_data": false, 00:28:43.842 "copy": true, 00:28:43.842 "nvme_iov_md": false 00:28:43.843 }, 00:28:43.843 "memory_domains": [ 00:28:43.843 { 00:28:43.843 "dma_device_id": "system", 00:28:43.843 "dma_device_type": 1 00:28:43.843 } 00:28:43.843 ], 00:28:43.843 "driver_specific": { 00:28:43.843 "nvme": [ 00:28:43.843 { 00:28:43.843 "trid": { 00:28:43.843 "trtype": "TCP", 00:28:43.843 "adrfam": "IPv4", 00:28:43.843 "traddr": "10.0.0.2", 00:28:43.843 "trsvcid": "4420", 00:28:43.843 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:43.843 }, 00:28:43.843 "ctrlr_data": { 00:28:43.843 "cntlid": 1, 00:28:43.843 "vendor_id": "0x8086", 00:28:43.843 "model_number": "SPDK bdev Controller", 00:28:43.843 "serial_number": "SPDK0", 00:28:43.843 "firmware_revision": "25.01", 00:28:43.843 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:43.843 "oacs": { 00:28:43.843 "security": 0, 00:28:43.843 "format": 0, 00:28:43.843 "firmware": 0, 00:28:43.843 "ns_manage": 0 00:28:43.843 }, 00:28:43.843 "multi_ctrlr": true, 00:28:43.843 "ana_reporting": false 00:28:43.843 }, 00:28:43.843 "vs": { 00:28:43.843 "nvme_version": "1.3" 00:28:43.843 }, 00:28:43.843 "ns_data": { 00:28:43.843 "id": 1, 00:28:43.843 "can_share": true 00:28:43.843 } 00:28:43.843 } 00:28:43.843 ], 00:28:43.843 "mp_policy": "active_passive" 00:28:43.843 } 00:28:43.843 } 00:28:43.843 ] 00:28:43.843 09:19:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3466914 00:28:43.843 09:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:43.843 09:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:28:43.843 Running I/O for 10 seconds... 00:28:45.214 Latency(us) 00:28:45.214 [2024-11-20T08:19:22.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.214 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:45.214 Nvme0n1 : 1.00 15113.00 59.04 0.00 0.00 0.00 0.00 0.00 00:28:45.214 [2024-11-20T08:19:22.243Z] =================================================================================================================== 00:28:45.214 [2024-11-20T08:19:22.243Z] Total : 15113.00 59.04 0.00 0.00 0.00 0.00 0.00 00:28:45.214 00:28:45.779 09:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e357c36d-501b-457b-a6c6-eaa20521682e 00:28:46.036 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:46.036 Nvme0n1 : 2.00 15240.00 59.53 0.00 0.00 0.00 0.00 0.00 00:28:46.036 [2024-11-20T08:19:23.065Z] =================================================================================================================== 00:28:46.036 [2024-11-20T08:19:23.065Z] Total : 15240.00 59.53 0.00 0.00 0.00 0.00 0.00 00:28:46.036 00:28:46.036 true 00:28:46.036 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u e357c36d-501b-457b-a6c6-eaa20521682e 00:28:46.036 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:28:46.294 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:28:46.294 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:28:46.294 09:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3466914 00:28:46.860 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:46.860 Nvme0n1 : 3.00 15324.67 59.86 0.00 0.00 0.00 0.00 0.00 00:28:46.860 [2024-11-20T08:19:23.889Z] =================================================================================================================== 00:28:46.860 [2024-11-20T08:19:23.889Z] Total : 15324.67 59.86 0.00 0.00 0.00 0.00 0.00 00:28:46.860 00:28:48.232 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:48.232 Nvme0n1 : 4.00 15430.50 60.28 0.00 0.00 0.00 0.00 0.00 00:28:48.232 [2024-11-20T08:19:25.261Z] =================================================================================================================== 00:28:48.232 [2024-11-20T08:19:25.261Z] Total : 15430.50 60.28 0.00 0.00 0.00 0.00 0.00 00:28:48.232 00:28:49.163 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:49.163 Nvme0n1 : 5.00 15494.00 60.52 0.00 0.00 0.00 0.00 0.00 00:28:49.163 [2024-11-20T08:19:26.192Z] =================================================================================================================== 00:28:49.163 [2024-11-20T08:19:26.192Z] Total : 15494.00 60.52 0.00 0.00 0.00 0.00 0.00 00:28:49.163 00:28:50.094 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:28:50.094 Nvme0n1 : 6.00 15547.00 60.73 0.00 0.00 0.00 0.00 0.00 00:28:50.094 [2024-11-20T08:19:27.123Z] =================================================================================================================== 00:28:50.094 [2024-11-20T08:19:27.123Z] Total : 15547.00 60.73 0.00 0.00 0.00 0.00 0.00 00:28:50.094 00:28:51.025 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:51.025 Nvme0n1 : 7.00 15602.86 60.95 0.00 0.00 0.00 0.00 0.00 00:28:51.025 [2024-11-20T08:19:28.054Z] =================================================================================================================== 00:28:51.025 [2024-11-20T08:19:28.054Z] Total : 15602.86 60.95 0.00 0.00 0.00 0.00 0.00 00:28:51.025 00:28:51.957 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:51.957 Nvme0n1 : 8.00 15621.00 61.02 0.00 0.00 0.00 0.00 0.00 00:28:51.957 [2024-11-20T08:19:28.986Z] =================================================================================================================== 00:28:51.957 [2024-11-20T08:19:28.986Z] Total : 15621.00 61.02 0.00 0.00 0.00 0.00 0.00 00:28:51.957 00:28:52.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:52.890 Nvme0n1 : 9.00 15254.11 59.59 0.00 0.00 0.00 0.00 0.00 00:28:52.890 [2024-11-20T08:19:29.919Z] =================================================================================================================== 00:28:52.890 [2024-11-20T08:19:29.919Z] Total : 15254.11 59.59 0.00 0.00 0.00 0.00 0.00 00:28:52.890 00:28:54.264 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:54.264 Nvme0n1 : 10.00 15074.90 58.89 0.00 0.00 0.00 0.00 0.00 00:28:54.264 [2024-11-20T08:19:31.293Z] =================================================================================================================== 00:28:54.264 [2024-11-20T08:19:31.293Z] Total : 15074.90 58.89 0.00 0.00 0.00 0.00 0.00 00:28:54.264 00:28:54.264 
00:28:54.264 Latency(us) 00:28:54.264 [2024-11-20T08:19:31.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:54.264 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:54.264 Nvme0n1 : 10.01 15079.66 58.90 0.00 0.00 8483.84 6310.87 20097.71 00:28:54.264 [2024-11-20T08:19:31.293Z] =================================================================================================================== 00:28:54.264 [2024-11-20T08:19:31.293Z] Total : 15079.66 58.90 0.00 0.00 8483.84 6310.87 20097.71 00:28:54.264 { 00:28:54.264 "results": [ 00:28:54.264 { 00:28:54.264 "job": "Nvme0n1", 00:28:54.264 "core_mask": "0x2", 00:28:54.264 "workload": "randwrite", 00:28:54.264 "status": "finished", 00:28:54.264 "queue_depth": 128, 00:28:54.264 "io_size": 4096, 00:28:54.264 "runtime": 10.005329, 00:28:54.264 "iops": 15079.664047029339, 00:28:54.264 "mibps": 58.904937683708354, 00:28:54.264 "io_failed": 0, 00:28:54.264 "io_timeout": 0, 00:28:54.264 "avg_latency_us": 8483.838925393975, 00:28:54.264 "min_latency_us": 6310.874074074074, 00:28:54.264 "max_latency_us": 20097.706666666665 00:28:54.264 } 00:28:54.264 ], 00:28:54.264 "core_count": 1 00:28:54.264 } 00:28:54.264 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3466763 00:28:54.264 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 3466763 ']' 00:28:54.264 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 3466763 00:28:54.264 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:28:54.264 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:54.264 09:19:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3466763 00:28:54.264 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:54.264 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:54.264 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3466763' 00:28:54.264 killing process with pid 3466763 00:28:54.264 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 3466763 00:28:54.264 Received shutdown signal, test time was about 10.000000 seconds 00:28:54.264 00:28:54.264 Latency(us) 00:28:54.264 [2024-11-20T08:19:31.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:54.264 [2024-11-20T08:19:31.293Z] =================================================================================================================== 00:28:54.264 [2024-11-20T08:19:31.293Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:54.264 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 3466763 00:28:54.264 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:54.522 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:54.781 09:19:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e357c36d-501b-457b-a6c6-eaa20521682e 00:28:54.781 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:28:55.039 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:28:55.039 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:28:55.039 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3464164 00:28:55.039 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3464164 00:28:55.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3464164 Killed "${NVMF_APP[@]}" "$@" 00:28:55.039 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:28:55.039 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:28:55.039 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:55.039 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:55.039 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:55.039 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3468230 00:28:55.039 09:19:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:28:55.039 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3468230 00:28:55.039 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3468230 ']' 00:28:55.039 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:55.039 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:55.039 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:55.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:55.039 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:55.039 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:55.297 [2024-11-20 09:19:32.077384] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:55.297 [2024-11-20 09:19:32.078410] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:28:55.297 [2024-11-20 09:19:32.078472] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:55.297 [2024-11-20 09:19:32.162703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.297 [2024-11-20 09:19:32.231617] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:55.297 [2024-11-20 09:19:32.231672] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:55.297 [2024-11-20 09:19:32.231703] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:55.297 [2024-11-20 09:19:32.231718] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:55.297 [2024-11-20 09:19:32.231731] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:55.297 [2024-11-20 09:19:32.232432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:55.556 [2024-11-20 09:19:32.326930] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:55.556 [2024-11-20 09:19:32.327345] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:55.556 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:55.556 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:28:55.556 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:55.556 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:55.556 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:55.556 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:55.556 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:55.815 [2024-11-20 09:19:32.687241] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:28:55.815 [2024-11-20 09:19:32.687404] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:28:55.815 [2024-11-20 09:19:32.687454] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:28:55.815 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:28:55.815 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev fa33bd27-6100-46e2-8296-3af6e1e9f9dd 00:28:55.815 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local 
bdev_name=fa33bd27-6100-46e2-8296-3af6e1e9f9dd 00:28:55.815 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:28:55.815 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:28:55.815 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:28:55.815 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:28:55.815 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:56.073 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fa33bd27-6100-46e2-8296-3af6e1e9f9dd -t 2000 00:28:56.331 [ 00:28:56.331 { 00:28:56.331 "name": "fa33bd27-6100-46e2-8296-3af6e1e9f9dd", 00:28:56.331 "aliases": [ 00:28:56.331 "lvs/lvol" 00:28:56.331 ], 00:28:56.331 "product_name": "Logical Volume", 00:28:56.331 "block_size": 4096, 00:28:56.331 "num_blocks": 38912, 00:28:56.331 "uuid": "fa33bd27-6100-46e2-8296-3af6e1e9f9dd", 00:28:56.331 "assigned_rate_limits": { 00:28:56.331 "rw_ios_per_sec": 0, 00:28:56.331 "rw_mbytes_per_sec": 0, 00:28:56.331 "r_mbytes_per_sec": 0, 00:28:56.331 "w_mbytes_per_sec": 0 00:28:56.331 }, 00:28:56.331 "claimed": false, 00:28:56.331 "zoned": false, 00:28:56.331 "supported_io_types": { 00:28:56.331 "read": true, 00:28:56.331 "write": true, 00:28:56.331 "unmap": true, 00:28:56.331 "flush": false, 00:28:56.331 "reset": true, 00:28:56.331 "nvme_admin": false, 00:28:56.331 "nvme_io": false, 00:28:56.331 "nvme_io_md": false, 00:28:56.331 "write_zeroes": true, 
00:28:56.331 "zcopy": false, 00:28:56.331 "get_zone_info": false, 00:28:56.331 "zone_management": false, 00:28:56.331 "zone_append": false, 00:28:56.331 "compare": false, 00:28:56.331 "compare_and_write": false, 00:28:56.331 "abort": false, 00:28:56.331 "seek_hole": true, 00:28:56.331 "seek_data": true, 00:28:56.331 "copy": false, 00:28:56.331 "nvme_iov_md": false 00:28:56.331 }, 00:28:56.331 "driver_specific": { 00:28:56.331 "lvol": { 00:28:56.331 "lvol_store_uuid": "e357c36d-501b-457b-a6c6-eaa20521682e", 00:28:56.331 "base_bdev": "aio_bdev", 00:28:56.331 "thin_provision": false, 00:28:56.331 "num_allocated_clusters": 38, 00:28:56.331 "snapshot": false, 00:28:56.331 "clone": false, 00:28:56.331 "esnap_clone": false 00:28:56.331 } 00:28:56.331 } 00:28:56.331 } 00:28:56.331 ] 00:28:56.331 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:28:56.331 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e357c36d-501b-457b-a6c6-eaa20521682e 00:28:56.331 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:28:56.589 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:28:56.589 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e357c36d-501b-457b-a6c6-eaa20521682e 00:28:56.589 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:28:56.846 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:28:56.847 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:57.104 [2024-11-20 09:19:34.081055] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:28:57.104 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e357c36d-501b-457b-a6c6-eaa20521682e 00:28:57.104 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:28:57.104 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e357c36d-501b-457b-a6c6-eaa20521682e 00:28:57.104 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:57.104 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:57.104 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:57.104 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:57.104 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:57.104 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:57.104 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:57.104 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:28:57.104 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e357c36d-501b-457b-a6c6-eaa20521682e 00:28:57.362 request: 00:28:57.362 { 00:28:57.362 "uuid": "e357c36d-501b-457b-a6c6-eaa20521682e", 00:28:57.362 "method": "bdev_lvol_get_lvstores", 00:28:57.362 "req_id": 1 00:28:57.362 } 00:28:57.362 Got JSON-RPC error response 00:28:57.362 response: 00:28:57.362 { 00:28:57.362 "code": -19, 00:28:57.362 "message": "No such device" 00:28:57.362 } 00:28:57.619 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:28:57.619 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:57.619 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:57.619 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:57.619 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:57.876 aio_bdev 00:28:57.876 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev fa33bd27-6100-46e2-8296-3af6e1e9f9dd 00:28:57.876 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=fa33bd27-6100-46e2-8296-3af6e1e9f9dd 00:28:57.876 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:28:57.876 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:28:57.876 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:28:57.876 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:28:57.876 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:58.192 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fa33bd27-6100-46e2-8296-3af6e1e9f9dd -t 2000 00:28:58.192 [ 00:28:58.192 { 00:28:58.192 "name": "fa33bd27-6100-46e2-8296-3af6e1e9f9dd", 00:28:58.192 "aliases": [ 00:28:58.192 "lvs/lvol" 00:28:58.192 ], 00:28:58.192 "product_name": "Logical Volume", 00:28:58.192 "block_size": 4096, 00:28:58.192 "num_blocks": 38912, 00:28:58.192 "uuid": "fa33bd27-6100-46e2-8296-3af6e1e9f9dd", 00:28:58.192 "assigned_rate_limits": { 00:28:58.192 "rw_ios_per_sec": 0, 00:28:58.192 "rw_mbytes_per_sec": 0, 00:28:58.192 
"r_mbytes_per_sec": 0, 00:28:58.192 "w_mbytes_per_sec": 0 00:28:58.192 }, 00:28:58.192 "claimed": false, 00:28:58.192 "zoned": false, 00:28:58.192 "supported_io_types": { 00:28:58.192 "read": true, 00:28:58.192 "write": true, 00:28:58.192 "unmap": true, 00:28:58.192 "flush": false, 00:28:58.192 "reset": true, 00:28:58.192 "nvme_admin": false, 00:28:58.192 "nvme_io": false, 00:28:58.192 "nvme_io_md": false, 00:28:58.192 "write_zeroes": true, 00:28:58.192 "zcopy": false, 00:28:58.192 "get_zone_info": false, 00:28:58.192 "zone_management": false, 00:28:58.192 "zone_append": false, 00:28:58.192 "compare": false, 00:28:58.192 "compare_and_write": false, 00:28:58.192 "abort": false, 00:28:58.192 "seek_hole": true, 00:28:58.192 "seek_data": true, 00:28:58.192 "copy": false, 00:28:58.192 "nvme_iov_md": false 00:28:58.192 }, 00:28:58.192 "driver_specific": { 00:28:58.192 "lvol": { 00:28:58.192 "lvol_store_uuid": "e357c36d-501b-457b-a6c6-eaa20521682e", 00:28:58.192 "base_bdev": "aio_bdev", 00:28:58.192 "thin_provision": false, 00:28:58.192 "num_allocated_clusters": 38, 00:28:58.192 "snapshot": false, 00:28:58.192 "clone": false, 00:28:58.192 "esnap_clone": false 00:28:58.192 } 00:28:58.192 } 00:28:58.192 } 00:28:58.192 ] 00:28:58.192 09:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:28:58.192 09:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e357c36d-501b-457b-a6c6-eaa20521682e 00:28:58.192 09:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:28:58.757 09:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:28:58.757 09:19:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e357c36d-501b-457b-a6c6-eaa20521682e 00:28:58.757 09:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:28:58.757 09:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:28:58.757 09:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fa33bd27-6100-46e2-8296-3af6e1e9f9dd 00:28:59.014 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e357c36d-501b-457b-a6c6-eaa20521682e 00:28:59.579 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:59.579 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:59.836 00:28:59.836 real 0m19.649s 00:28:59.836 user 0m36.680s 00:28:59.836 sys 0m4.684s 00:28:59.836 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:59.836 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:59.836 ************************************ 00:28:59.836 END TEST lvs_grow_dirty 00:28:59.836 ************************************ 
00:28:59.836 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:28:59.836 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:28:59.836 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:28:59.836 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:28:59.836 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:28:59.836 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:28:59.836 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:28:59.836 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:28:59.837 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:28:59.837 nvmf_trace.0 00:28:59.837 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:28:59.837 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:28:59.837 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:59.837 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:28:59.837 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:59.837 09:19:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:28:59.837 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:59.837 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:59.837 rmmod nvme_tcp 00:28:59.837 rmmod nvme_fabrics 00:28:59.837 rmmod nvme_keyring 00:28:59.837 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:59.837 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:28:59.837 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:28:59.837 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3468230 ']' 00:28:59.837 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3468230 00:28:59.837 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 3468230 ']' 00:28:59.837 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 3468230 00:28:59.837 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:28:59.837 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:59.837 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3468230 00:28:59.837 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:59.837 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:59.837 
09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3468230' 00:28:59.837 killing process with pid 3468230 00:28:59.837 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 3468230 00:28:59.837 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 3468230 00:29:00.125 09:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:00.125 09:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:00.125 09:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:00.125 09:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:29:00.125 09:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:29:00.125 09:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:00.125 09:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:29:00.125 09:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:00.126 09:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:00.126 09:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.126 09:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:00.126 09:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.071 
09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:02.071 00:29:02.071 real 0m43.290s 00:29:02.071 user 0m56.161s 00:29:02.071 sys 0m8.626s 00:29:02.071 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:02.071 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:02.071 ************************************ 00:29:02.071 END TEST nvmf_lvs_grow 00:29:02.071 ************************************ 00:29:02.071 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:02.071 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:02.071 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:02.071 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:02.332 ************************************ 00:29:02.332 START TEST nvmf_bdev_io_wait 00:29:02.332 ************************************ 00:29:02.332 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:02.332 * Looking for test storage... 
00:29:02.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:02.332 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:02.332 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:29:02.332 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:02.332 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:02.332 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:02.333 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:02.333 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:02.333 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:29:02.333 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:29:02.333 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:29:02.333 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:29:02.333 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:29:02.333 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:29:02.333 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:29:02.333 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:29:02.333 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:29:02.333 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:29:02.333 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:02.333 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:02.333 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:29:02.333 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:29:02.333 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:02.333 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:29:02.333 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:29:02.333 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:29:02.333 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:29:02.333 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:02.333 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:29:02.333 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:29:02.333 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:02.333 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:02.333 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:29:02.333 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:02.333 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:02.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.333 --rc genhtml_branch_coverage=1 00:29:02.333 --rc genhtml_function_coverage=1 00:29:02.333 --rc genhtml_legend=1 00:29:02.333 --rc geninfo_all_blocks=1 00:29:02.333 --rc geninfo_unexecuted_blocks=1 00:29:02.333 00:29:02.333 ' 00:29:02.333 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:02.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.333 --rc genhtml_branch_coverage=1 00:29:02.333 --rc genhtml_function_coverage=1 00:29:02.333 --rc genhtml_legend=1 00:29:02.334 --rc geninfo_all_blocks=1 00:29:02.334 --rc geninfo_unexecuted_blocks=1 00:29:02.334 00:29:02.334 ' 00:29:02.334 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:02.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.334 --rc genhtml_branch_coverage=1 00:29:02.334 --rc genhtml_function_coverage=1 00:29:02.334 --rc genhtml_legend=1 00:29:02.334 --rc geninfo_all_blocks=1 00:29:02.334 --rc geninfo_unexecuted_blocks=1 00:29:02.334 00:29:02.334 ' 00:29:02.334 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:02.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.334 --rc genhtml_branch_coverage=1 00:29:02.334 --rc genhtml_function_coverage=1 
00:29:02.334 --rc genhtml_legend=1 00:29:02.334 --rc geninfo_all_blocks=1 00:29:02.334 --rc geninfo_unexecuted_blocks=1 00:29:02.334 00:29:02.334 ' 00:29:02.334 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:02.334 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:29:02.334 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:02.334 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:02.334 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:02.334 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:02.334 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:02.334 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:02.334 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:02.334 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:02.334 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:02.334 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:02.334 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:02.334 09:19:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:02.334 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:02.334 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:02.335 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:02.335 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:02.335 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:02.335 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:29:02.335 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:02.335 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:02.335 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:02.335 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.335 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.335 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.335 09:19:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:29:02.335 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.335 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:29:02.335 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:02.335 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:02.335 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:02.335 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:02.335 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:02.335 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:02.335 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:02.335 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:02.336 09:19:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:02.336 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:02.336 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:02.336 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:02.336 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:29:02.336 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:02.336 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:02.336 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:02.336 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:02.336 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:02.336 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.336 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:02.336 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.336 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:02.336 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:02.336 09:19:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:29:02.336 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:29:04.864 09:19:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:04.864 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:04.864 Found 
0000:09:00.1 (0x8086 - 0x159b) 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.864 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:04.864 Found net devices under 0000:09:00.0: cvl_0_0 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:04.865 Found net devices under 0000:09:00.1: cvl_0_1 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:29:04.865 09:19:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:04.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:04.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:29:04.865 00:29:04.865 --- 10.0.0.2 ping statistics --- 00:29:04.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.865 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:04.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:04.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:29:04.865 00:29:04.865 --- 10.0.0.1 ping statistics --- 00:29:04.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.865 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:04.865 09:19:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3470780 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3470780 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 3470780 ']' 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:04.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:04.865 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:04.865 [2024-11-20 09:19:41.719617] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:04.865 [2024-11-20 09:19:41.720697] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:29:04.865 [2024-11-20 09:19:41.720754] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:04.865 [2024-11-20 09:19:41.795330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:04.865 [2024-11-20 09:19:41.856979] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:04.866 [2024-11-20 09:19:41.857027] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:04.866 [2024-11-20 09:19:41.857040] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:04.866 [2024-11-20 09:19:41.857051] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:04.866 [2024-11-20 09:19:41.857061] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:04.866 [2024-11-20 09:19:41.860322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:04.866 [2024-11-20 09:19:41.860390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:04.866 [2024-11-20 09:19:41.860456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:04.866 [2024-11-20 09:19:41.860459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:04.866 [2024-11-20 09:19:41.860930] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:05.123 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:05.123 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:29:05.123 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:05.123 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:05.123 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:05.123 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:05.123 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:29:05.123 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.124 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:05.124 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.124 09:19:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:29:05.124 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.124 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:05.124 [2024-11-20 09:19:42.103506] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:05.124 [2024-11-20 09:19:42.103736] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:05.124 [2024-11-20 09:19:42.104668] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:05.124 [2024-11-20 09:19:42.105527] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:29:05.124 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.124 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:05.124 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.124 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:05.124 [2024-11-20 09:19:42.113099] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:05.124 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.124 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:05.124 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.124 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:05.124 Malloc0 00:29:05.124 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.124 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:05.124 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.124 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.382 09:19:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:05.382 [2024-11-20 09:19:42.165338] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3470917 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3470919 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:29:05.382 09:19:42 
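The rpc_cmd calls traced above amount to the target-side configuration sequence below. `RPC="scripts/rpc.py"` is an assumed path (rpc_cmd normally resolves it inside the SPDK tree), and the `run()` echo wrapper keeps this a dry-run sketch rather than a live configuration; the small `bdev_set_options -p 5 -c 1` pool is what this bdev_io_wait test relies on to provoke IO-wait retries:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the RPC sequence that configures the NVMe-oF target above.
# RPC path is an assumption; run() echoes instead of executing.
RPC="scripts/rpc.py"
run() { echo "+ $*"; }

run "$RPC" bdev_set_options -p 5 -c 1             # tiny bdev_io pool to force IO_WAIT
run "$RPC" framework_start_init                   # leave the --wait-for-rpc state
run "$RPC" nvmf_create_transport -t tcp -o -u 8192
run "$RPC" bdev_malloc_create 64 512 -b Malloc0   # 64 MiB RAM disk, 512 B blocks
run "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
run "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
run "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```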
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3470921 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:05.382 { 00:29:05.382 "params": { 00:29:05.382 "name": "Nvme$subsystem", 00:29:05.382 "trtype": "$TEST_TRANSPORT", 00:29:05.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.382 "adrfam": "ipv4", 00:29:05.382 "trsvcid": "$NVMF_PORT", 00:29:05.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.382 "hdgst": ${hdgst:-false}, 00:29:05.382 "ddgst": ${ddgst:-false} 00:29:05.382 }, 00:29:05.382 "method": "bdev_nvme_attach_controller" 00:29:05.382 } 00:29:05.382 EOF 00:29:05.382 )") 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:05.382 09:19:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:05.382 { 00:29:05.382 "params": { 00:29:05.382 "name": "Nvme$subsystem", 00:29:05.382 "trtype": "$TEST_TRANSPORT", 00:29:05.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.382 "adrfam": "ipv4", 00:29:05.382 "trsvcid": "$NVMF_PORT", 00:29:05.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.382 "hdgst": ${hdgst:-false}, 00:29:05.382 "ddgst": ${ddgst:-false} 00:29:05.382 }, 00:29:05.382 "method": "bdev_nvme_attach_controller" 00:29:05.382 } 00:29:05.382 EOF 00:29:05.382 )") 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3470923 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:05.382 { 00:29:05.382 "params": { 00:29:05.382 "name": 
"Nvme$subsystem", 00:29:05.382 "trtype": "$TEST_TRANSPORT", 00:29:05.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.382 "adrfam": "ipv4", 00:29:05.382 "trsvcid": "$NVMF_PORT", 00:29:05.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.382 "hdgst": ${hdgst:-false}, 00:29:05.382 "ddgst": ${ddgst:-false} 00:29:05.382 }, 00:29:05.382 "method": "bdev_nvme_attach_controller" 00:29:05.382 } 00:29:05.382 EOF 00:29:05.382 )") 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:05.382 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:05.382 { 00:29:05.382 "params": { 00:29:05.382 "name": "Nvme$subsystem", 00:29:05.382 "trtype": "$TEST_TRANSPORT", 00:29:05.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.382 "adrfam": "ipv4", 00:29:05.382 "trsvcid": "$NVMF_PORT", 00:29:05.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.382 "hdgst": ${hdgst:-false}, 00:29:05.383 "ddgst": ${ddgst:-false} 00:29:05.383 }, 00:29:05.383 "method": 
"bdev_nvme_attach_controller" 00:29:05.383 } 00:29:05.383 EOF 00:29:05.383 )") 00:29:05.383 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:05.383 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3470917 00:29:05.383 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:05.383 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:05.383 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:05.383 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:05.383 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:05.383 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:05.383 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:05.383 "params": { 00:29:05.383 "name": "Nvme1", 00:29:05.383 "trtype": "tcp", 00:29:05.383 "traddr": "10.0.0.2", 00:29:05.383 "adrfam": "ipv4", 00:29:05.383 "trsvcid": "4420", 00:29:05.383 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:05.383 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:05.383 "hdgst": false, 00:29:05.383 "ddgst": false 00:29:05.383 }, 00:29:05.383 "method": "bdev_nvme_attach_controller" 00:29:05.383 }' 00:29:05.383 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:05.383 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:05.383 "params": { 00:29:05.383 "name": "Nvme1", 00:29:05.383 "trtype": "tcp", 00:29:05.383 "traddr": "10.0.0.2", 00:29:05.383 "adrfam": "ipv4", 00:29:05.383 "trsvcid": "4420", 00:29:05.383 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:29:05.383 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:05.383 "hdgst": false, 00:29:05.383 "ddgst": false 00:29:05.383 }, 00:29:05.383 "method": "bdev_nvme_attach_controller" 00:29:05.383 }' 00:29:05.383 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:05.383 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:05.383 "params": { 00:29:05.383 "name": "Nvme1", 00:29:05.383 "trtype": "tcp", 00:29:05.383 "traddr": "10.0.0.2", 00:29:05.383 "adrfam": "ipv4", 00:29:05.383 "trsvcid": "4420", 00:29:05.383 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:05.383 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:05.383 "hdgst": false, 00:29:05.383 "ddgst": false 00:29:05.383 }, 00:29:05.383 "method": "bdev_nvme_attach_controller" 00:29:05.383 }' 00:29:05.383 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:05.383 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:05.383 "params": { 00:29:05.383 "name": "Nvme1", 00:29:05.383 "trtype": "tcp", 00:29:05.383 "traddr": "10.0.0.2", 00:29:05.383 "adrfam": "ipv4", 00:29:05.383 "trsvcid": "4420", 00:29:05.383 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:05.383 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:05.383 "hdgst": false, 00:29:05.383 "ddgst": false 00:29:05.383 }, 00:29:05.383 "method": "bdev_nvme_attach_controller" 00:29:05.383 }' 00:29:05.383 [2024-11-20 09:19:42.216817] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:29:05.383 [2024-11-20 09:19:42.216814] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:29:05.383 [2024-11-20 09:19:42.216817] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:29:05.383 [2024-11-20 09:19:42.216814] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:29:05.383 [2024-11-20 09:19:42.216899] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:29:05.383 [2024-11-20 09:19:42.216903] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:05.383 [2024-11-20 09:19:42.216903] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:29:05.383 [2024-11-20 09:19:42.216903] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:29:05.383 [2024-11-20 09:19:42.407152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.641 [2024-11-20 09:19:42.462778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:05.641 [2024-11-20 09:19:42.510787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.641 [2024-11-20 09:19:42.567555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:05.641 [2024-11-20 09:19:42.613072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.898 [2024-11-20 09:19:42.670946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:05.898 [2024-11-20 09:19:42.690825] app.c: 919:spdk_app_start: *NOTICE*: Total cores 
available: 1 00:29:05.898 [2024-11-20 09:19:42.743982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:05.898 Running I/O for 1 seconds... 00:29:05.898 Running I/O for 1 seconds... 00:29:05.898 Running I/O for 1 seconds... 00:29:06.155 Running I/O for 1 seconds... 00:29:07.086 6791.00 IOPS, 26.53 MiB/s [2024-11-20T08:19:44.115Z] 191384.00 IOPS, 747.59 MiB/s 00:29:07.086 Latency(us) 00:29:07.086 [2024-11-20T08:19:44.115Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.086 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:29:07.086 Nvme1n1 : 1.00 191025.23 746.19 0.00 0.00 666.48 292.79 1844.72 00:29:07.086 [2024-11-20T08:19:44.115Z] =================================================================================================================== 00:29:07.086 [2024-11-20T08:19:44.116Z] Total : 191025.23 746.19 0.00 0.00 666.48 292.79 1844.72 00:29:07.087 00:29:07.087 Latency(us) 00:29:07.087 [2024-11-20T08:19:44.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.087 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:29:07.087 Nvme1n1 : 1.06 6541.23 25.55 0.00 0.00 18673.66 4199.16 58254.22 00:29:07.087 [2024-11-20T08:19:44.116Z] =================================================================================================================== 00:29:07.087 [2024-11-20T08:19:44.116Z] Total : 6541.23 25.55 0.00 0.00 18673.66 4199.16 58254.22 00:29:07.087 6670.00 IOPS, 26.05 MiB/s 00:29:07.087 Latency(us) 00:29:07.087 [2024-11-20T08:19:44.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.087 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:29:07.087 Nvme1n1 : 1.01 6789.96 26.52 0.00 0.00 18793.93 4684.61 30292.20 00:29:07.087 [2024-11-20T08:19:44.116Z] =================================================================================================================== 
00:29:07.087 [2024-11-20T08:19:44.116Z] Total : 6789.96 26.52 0.00 0.00 18793.93 4684.61 30292.20 00:29:07.087 9291.00 IOPS, 36.29 MiB/s 00:29:07.087 Latency(us) 00:29:07.087 [2024-11-20T08:19:44.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.087 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:29:07.087 Nvme1n1 : 1.01 9362.70 36.57 0.00 0.00 13618.90 2051.03 19126.80 00:29:07.087 [2024-11-20T08:19:44.116Z] =================================================================================================================== 00:29:07.087 [2024-11-20T08:19:44.116Z] Total : 9362.70 36.57 0.00 0.00 13618.90 2051.03 19126.80 00:29:07.344 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3470919 00:29:07.344 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3470921 00:29:07.344 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3470923 00:29:07.344 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:07.344 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.344 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:07.344 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.344 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:29:07.344 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:29:07.344 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:29:07.344 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:29:07.344 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:07.344 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:29:07.344 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:07.344 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:07.344 rmmod nvme_tcp 00:29:07.344 rmmod nvme_fabrics 00:29:07.344 rmmod nvme_keyring 00:29:07.344 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:07.344 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:29:07.344 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:29:07.344 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3470780 ']' 00:29:07.344 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3470780 00:29:07.344 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 3470780 ']' 00:29:07.344 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 3470780 00:29:07.344 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:29:07.344 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:07.344 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3470780 00:29:07.344 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:07.344 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:07.344 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3470780' 00:29:07.344 killing process with pid 3470780 00:29:07.344 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 3470780 00:29:07.344 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 3470780 00:29:07.602 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:07.602 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:07.602 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:07.602 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:29:07.602 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:29:07.603 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:07.603 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:29:07.603 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:07.603 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:07.603 09:19:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.603 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:07.603 09:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:09.506 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:09.506 00:29:09.506 real 0m7.392s 00:29:09.506 user 0m14.482s 00:29:09.506 sys 0m3.980s 00:29:09.506 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:09.506 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:09.506 ************************************ 00:29:09.506 END TEST nvmf_bdev_io_wait 00:29:09.506 ************************************ 00:29:09.506 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:29:09.506 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:09.506 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:09.506 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:09.764 ************************************ 00:29:09.764 START TEST nvmf_queue_depth 00:29:09.764 ************************************ 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 
00:29:09.765 * Looking for test storage... 00:29:09.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:09.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.765 --rc genhtml_branch_coverage=1 00:29:09.765 --rc genhtml_function_coverage=1 00:29:09.765 --rc genhtml_legend=1 00:29:09.765 --rc geninfo_all_blocks=1 00:29:09.765 --rc geninfo_unexecuted_blocks=1 00:29:09.765 00:29:09.765 ' 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:09.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.765 --rc genhtml_branch_coverage=1 00:29:09.765 --rc genhtml_function_coverage=1 00:29:09.765 --rc genhtml_legend=1 00:29:09.765 --rc geninfo_all_blocks=1 00:29:09.765 --rc geninfo_unexecuted_blocks=1 00:29:09.765 00:29:09.765 ' 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:09.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.765 --rc genhtml_branch_coverage=1 00:29:09.765 --rc genhtml_function_coverage=1 00:29:09.765 --rc genhtml_legend=1 00:29:09.765 --rc geninfo_all_blocks=1 00:29:09.765 --rc geninfo_unexecuted_blocks=1 00:29:09.765 00:29:09.765 ' 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:09.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.765 --rc genhtml_branch_coverage=1 00:29:09.765 --rc genhtml_function_coverage=1 00:29:09.765 
--rc genhtml_legend=1 00:29:09.765 --rc geninfo_all_blocks=1 00:29:09.765 --rc geninfo_unexecuted_blocks=1 00:29:09.765 00:29:09.765 ' 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.765 09:19:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:29:09.765 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:09.766 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:09.766 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:09.766 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:09.766 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:09.766 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:09.766 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:09.766 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:09.766 09:19:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:09.766 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:09.766 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:29:09.766 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:29:09.766 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:09.766 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:29:09.766 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:09.766 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:09.766 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:09.766 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:09.766 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:09.766 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.766 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:09.766 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:09.766 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:09.766 09:19:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:09.766 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:29:09.766 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:12.296 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:12.297 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:29:12.297 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:12.297 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:12.297 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:12.297 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:12.297 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:12.297 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:29:12.297 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:12.297 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:29:12.297 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:29:12.297 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:29:12.297 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:29:12.297 
09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:29:12.297 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:29:12.297 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:12.297 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:12.297 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:12.297 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:12.297 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:12.297 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:12.297 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:12.297 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:12.297 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:12.297 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:12.298 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:12.298 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:12.298 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:12.298 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:12.298 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:12.298 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:12.298 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:12.298 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:12.298 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:12.298 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:12.298 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:12.298 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:12.298 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:12.298 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.298 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.298 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:12.298 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:12.298 09:19:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:12.298 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:12.298 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:12.298 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:12.298 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.298 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.298 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:12.298 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:12.299 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:12.299 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:12.299 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:12.299 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.299 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:12.299 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.299 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:12.299 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:29:12.299 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.299 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:12.299 Found net devices under 0000:09:00.0: cvl_0_0 00:29:12.299 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.299 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:12.299 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.299 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:12.299 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.299 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:12.299 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:12.299 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.299 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:12.299 Found net devices under 0000:09:00.1: cvl_0_1 00:29:12.300 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.300 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:12.300 09:19:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:29:12.300 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:12.300 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:12.300 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:12.300 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:12.300 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:12.300 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:12.300 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:12.300 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:12.300 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:12.300 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:12.300 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:12.300 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:12.300 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:12.300 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:29:12.300 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:12.300 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:12.300 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:12.300 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:12.300 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:12.300 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:12.300 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:12.300 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:12.300 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:12.300 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:12.300 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:12.300 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:12.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:12.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:29:12.300 00:29:12.300 --- 10.0.0.2 ping statistics --- 00:29:12.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.300 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:29:12.300 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:12.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:12.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:29:12.300 00:29:12.300 --- 10.0.0.1 ping statistics --- 00:29:12.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.300 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:29:12.300 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:12.300 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:29:12.300 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:12.300 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:12.300 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:12.300 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:12.300 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:12.300 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:12.300 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:12.300 09:19:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:29:12.300 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:12.300 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:12.300 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:12.300 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3473144 00:29:12.300 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:29:12.300 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3473144 00:29:12.300 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3473144 ']' 00:29:12.300 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:12.300 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:12.300 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:12.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:12.300 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:12.300 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:12.300 [2024-11-20 09:19:49.087714] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:12.300 [2024-11-20 09:19:49.088791] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:29:12.300 [2024-11-20 09:19:49.088856] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:12.300 [2024-11-20 09:19:49.164504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.300 [2024-11-20 09:19:49.222451] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:12.300 [2024-11-20 09:19:49.222505] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:12.300 [2024-11-20 09:19:49.222534] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:12.300 [2024-11-20 09:19:49.222546] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:12.300 [2024-11-20 09:19:49.222556] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:12.300 [2024-11-20 09:19:49.223126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:12.300 [2024-11-20 09:19:49.310385] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:12.302 [2024-11-20 09:19:49.310716] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:12.563 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:12.563 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:29:12.563 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:12.563 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:12.563 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:12.563 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:12.563 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:12.563 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.563 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:12.563 [2024-11-20 09:19:49.363757] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:12.563 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.563 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:12.563 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.563 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:12.563 Malloc0 00:29:12.563 09:19:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.563 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:12.563 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.563 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:12.563 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.563 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:12.563 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.563 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:12.563 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.563 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:12.563 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.563 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:12.563 [2024-11-20 09:19:49.423840] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:12.563 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.563 
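The target-side configuration traced in the log above (queue_depth.sh lines @23 through @27) maps to five RPC calls. The sketch below stubs the `rpc_cmd` wrapper so the sequence can be read and dry-run without a live SPDK target; in the actual test these calls are forwarded to the running `nvmf_tgt` over `/var/tmp/spdk.sock`:

```shell
#!/bin/sh
# Dry-run sketch of the queue_depth target setup seen in the log.
# rpc_cmd is a stub that only prints the call; the real test forwards
# each call to SPDK's rpc.py against the target's UNIX socket.
rpc_cmd() { echo "rpc: $*"; }

# All five invocations below are taken verbatim from the trace above.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512 -b Malloc0   # 64 MiB RAM bdev, 512-byte blocks
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The listener address and port match the `*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***` notice emitted immediately afterward.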
09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3473174 00:29:12.563 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:29:12.563 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:12.563 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3473174 /var/tmp/bdevperf.sock 00:29:12.563 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3473174 ']' 00:29:12.563 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:12.563 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:12.563 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:12.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:12.563 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:12.563 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:12.563 [2024-11-20 09:19:49.471242] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:29:12.564 [2024-11-20 09:19:49.471326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3473174 ] 00:29:12.564 [2024-11-20 09:19:49.542428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.821 [2024-11-20 09:19:49.603724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:12.821 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:12.821 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:29:12.821 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:12.821 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.821 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:13.077 NVMe0n1 00:29:13.077 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.077 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:13.077 Running I/O for 10 seconds... 
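bdevperf emits per-second IOPS samples and then a final summary; the MiB/s figure in that summary is simply IOPS × io_size / 2^20. A quick cross-check of the final figures from this run (`iops` and `io_size` are copied from the results JSON below; shell has no float arithmetic, so awk does the math):

```shell
#!/bin/sh
# Cross-check the bdevperf summary: MiB/s should follow directly from
# the reported IOPS and the 4096-byte I/O size used by this test.
iops=8676.149977547828   # "iops" field from the results JSON
io_size=4096             # "-o 4096" passed to bdevperf

mibps=$(awk -v iops="$iops" -v sz="$io_size" \
    'BEGIN { printf "%.6f", iops * sz / (1024 * 1024) }')
echo "MiB/s = $mibps"
```

The computed value agrees with the `"mibps": 33.8912108497962` field reported by the tool, since 4096 / 2^20 is exactly 1/256.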
00:29:15.394 8192.00 IOPS, 32.00 MiB/s [2024-11-20T08:19:53.357Z] 8352.00 IOPS, 32.62 MiB/s [2024-11-20T08:19:54.288Z] 8533.33 IOPS, 33.33 MiB/s [2024-11-20T08:19:55.220Z] 8553.25 IOPS, 33.41 MiB/s [2024-11-20T08:19:56.151Z] 8603.40 IOPS, 33.61 MiB/s [2024-11-20T08:19:57.082Z] 8616.17 IOPS, 33.66 MiB/s [2024-11-20T08:19:58.452Z] 8634.14 IOPS, 33.73 MiB/s [2024-11-20T08:19:59.386Z] 8673.12 IOPS, 33.88 MiB/s [2024-11-20T08:20:00.317Z] 8653.33 IOPS, 33.80 MiB/s [2024-11-20T08:20:00.317Z] 8700.10 IOPS, 33.98 MiB/s 00:29:23.288 Latency(us) 00:29:23.288 [2024-11-20T08:20:00.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:23.288 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:29:23.288 Verification LBA range: start 0x0 length 0x4000 00:29:23.288 NVMe0n1 : 10.14 8676.15 33.89 0.00 0.00 117076.91 23787.14 89323.14 00:29:23.288 [2024-11-20T08:20:00.317Z] =================================================================================================================== 00:29:23.288 [2024-11-20T08:20:00.317Z] Total : 8676.15 33.89 0.00 0.00 117076.91 23787.14 89323.14 00:29:23.288 { 00:29:23.288 "results": [ 00:29:23.288 { 00:29:23.288 "job": "NVMe0n1", 00:29:23.288 "core_mask": "0x1", 00:29:23.288 "workload": "verify", 00:29:23.288 "status": "finished", 00:29:23.288 "verify_range": { 00:29:23.288 "start": 0, 00:29:23.288 "length": 16384 00:29:23.288 }, 00:29:23.288 "queue_depth": 1024, 00:29:23.288 "io_size": 4096, 00:29:23.288 "runtime": 10.143785, 00:29:23.288 "iops": 8676.149977547828, 00:29:23.288 "mibps": 33.8912108497962, 00:29:23.288 "io_failed": 0, 00:29:23.288 "io_timeout": 0, 00:29:23.288 "avg_latency_us": 117076.91000262178, 00:29:23.288 "min_latency_us": 23787.140740740742, 00:29:23.288 "max_latency_us": 89323.14074074074 00:29:23.288 } 00:29:23.288 ], 00:29:23.288 "core_count": 1 00:29:23.288 } 00:29:23.288 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 3473174 00:29:23.288 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3473174 ']' 00:29:23.288 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3473174 00:29:23.288 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:29:23.288 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:23.288 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3473174 00:29:23.288 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:23.288 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:23.288 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3473174' 00:29:23.288 killing process with pid 3473174 00:29:23.288 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3473174 00:29:23.288 Received shutdown signal, test time was about 10.000000 seconds 00:29:23.288 00:29:23.288 Latency(us) 00:29:23.288 [2024-11-20T08:20:00.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:23.288 [2024-11-20T08:20:00.317Z] =================================================================================================================== 00:29:23.288 [2024-11-20T08:20:00.317Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:23.288 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3473174 00:29:23.545 09:20:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:29:23.545 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:29:23.545 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:23.545 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:29:23.545 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:23.545 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:29:23.545 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:23.545 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:23.545 rmmod nvme_tcp 00:29:23.545 rmmod nvme_fabrics 00:29:23.545 rmmod nvme_keyring 00:29:23.545 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:23.545 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:29:23.545 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:29:23.545 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3473144 ']' 00:29:23.545 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3473144 00:29:23.545 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3473144 ']' 00:29:23.545 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3473144 00:29:23.545 09:20:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:29:23.545 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:23.545 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3473144 00:29:23.545 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:23.545 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:23.545 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3473144' 00:29:23.545 killing process with pid 3473144 00:29:23.545 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3473144 00:29:23.545 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3473144 00:29:23.803 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:23.803 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:23.803 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:23.803 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:29:23.803 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:29:23.803 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:23.803 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:29:23.803 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:23.803 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:23.803 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.803 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:23.803 09:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:26.340 09:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:26.340 00:29:26.340 real 0m16.271s 00:29:26.340 user 0m22.483s 00:29:26.340 sys 0m3.418s 00:29:26.340 09:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:26.340 09:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:26.340 ************************************ 00:29:26.340 END TEST nvmf_queue_depth 00:29:26.340 ************************************ 00:29:26.340 09:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:29:26.340 09:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:26.340 09:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:26.340 09:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:26.340 ************************************ 00:29:26.340 START 
TEST nvmf_target_multipath 00:29:26.340 ************************************ 00:29:26.340 09:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:29:26.340 * Looking for test storage... 00:29:26.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:26.340 09:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:26.340 09:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:29:26.341 09:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:29:26.341 09:20:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:26.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.341 --rc genhtml_branch_coverage=1 00:29:26.341 --rc genhtml_function_coverage=1 00:29:26.341 --rc genhtml_legend=1 00:29:26.341 --rc geninfo_all_blocks=1 00:29:26.341 --rc geninfo_unexecuted_blocks=1 00:29:26.341 00:29:26.341 ' 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:26.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.341 --rc genhtml_branch_coverage=1 00:29:26.341 --rc genhtml_function_coverage=1 00:29:26.341 --rc genhtml_legend=1 00:29:26.341 --rc geninfo_all_blocks=1 00:29:26.341 --rc geninfo_unexecuted_blocks=1 00:29:26.341 00:29:26.341 ' 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:26.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.341 --rc genhtml_branch_coverage=1 00:29:26.341 --rc genhtml_function_coverage=1 00:29:26.341 --rc genhtml_legend=1 00:29:26.341 --rc geninfo_all_blocks=1 00:29:26.341 --rc geninfo_unexecuted_blocks=1 00:29:26.341 00:29:26.341 ' 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:26.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.341 --rc genhtml_branch_coverage=1 00:29:26.341 --rc genhtml_function_coverage=1 00:29:26.341 --rc genhtml_legend=1 00:29:26.341 --rc geninfo_all_blocks=1 00:29:26.341 --rc geninfo_unexecuted_blocks=1 00:29:26.341 00:29:26.341 ' 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:26.341 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:26.342 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:26.342 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:26.342 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:26.342 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:26.342 09:20:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:26.342 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:26.342 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:26.342 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:26.342 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:26.342 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:29:26.342 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:26.342 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:26.342 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:26.342 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:26.342 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:26.342 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.342 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:26.342 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:26.342 09:20:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:26.342 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:26.342 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:29:26.342 09:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:28.240 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:28.240 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:29:28.240 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:28.240 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:28.240 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:28.240 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:28.240 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:28.240 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:29:28.240 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:28.240 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:29:28.240 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:29:28.240 09:20:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:29:28.240 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:29:28.240 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:29:28.240 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:29:28.240 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:28.240 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:28.240 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:28.240 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:28.240 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:28.240 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:28.240 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:28.240 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:28.241 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:28.241 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:28.241 Found net devices under 0000:09:00.0: cvl_0_0 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.241 09:20:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:28.241 Found net devices under 0000:09:00.1: cvl_0_1 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:28.241 09:20:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:28.241 09:20:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:28.241 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:28.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:28.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:29:28.499 00:29:28.499 --- 10.0.0.2 ping statistics --- 00:29:28.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:28.499 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:29:28.499 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:28.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:28.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:29:28.499 00:29:28.499 --- 10.0.0.1 ping statistics --- 00:29:28.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:28.499 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:29:28.499 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:28.499 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:29:28.500 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:28.500 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:28.500 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:28.500 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:28.500 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:28.500 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:28.500 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:28.500 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:29:28.500 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:29:28.500 only one NIC for nvmf test 00:29:28.500 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:29:28.500 09:20:05 
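The `nvmf_tcp_init` trace above builds a single-host loopback topology from one dual-port NIC: one port (`cvl_0_0`) is moved into a private network namespace as the target side, the sibling port (`cvl_0_1`) stays in the root namespace as the initiator, port 4420 is opened in iptables, and the link is verified with a ping in each direction. The helper below is a hypothetical illustration of that command sequence (it emits the plan rather than executing it, since the real steps need root and physical NICs); the function name and the emission approach are assumptions, not part of SPDK.

```shell
# Sketch of the namespace-based topology nvmf_tcp_init sets up in the trace
# above. Emits the command plan instead of running it (the real commands
# require root). Illustrative helper, not the SPDK function itself.
plan_nvmf_tcp_topology() {
    local target_if=$1 initiator_if=$2 ns=$3
    cat <<EOF
ip netns add $ns
ip link set $target_if netns $ns
ip addr add 10.0.0.1/24 dev $initiator_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if
ip link set $initiator_if up
ip netns exec $ns ip link set $target_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec $ns ping -c 1 10.0.0.1
EOF
}
```

The ordering mirrors the trace: namespace first, then addresses, links up (including loopback inside the namespace), the firewall rule for the NVMe/TCP port, and finally the bidirectional ping check.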
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:28.500 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:29:28.500 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:28.500 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:29:28.500 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:28.500 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:28.500 rmmod nvme_tcp 00:29:28.500 rmmod nvme_fabrics 00:29:28.500 rmmod nvme_keyring 00:29:28.500 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:28.500 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:29:28.500 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:29:28.500 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:29:28.500 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:28.500 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:28.500 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:28.500 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:29:28.500 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:29:28.500 09:20:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:28.500 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:29:28.500 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:28.500 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:28.500 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.500 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:28.500 09:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.402 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:30.402 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:29:30.402 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:29:30.402 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:30.402 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:29:30.402 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:30.402 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:29:30.402 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:29:30.402 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:30.402 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:30.402 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:29:30.402 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:29:30.402 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:29:30.402 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:30.402 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:30.402 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:30.402 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:29:30.402 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:29:30.402 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:30.402 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:29:30.402 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:30.402 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:30.402 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.402 
09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:30.402 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.402 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:30.402 00:29:30.402 real 0m4.540s 00:29:30.402 user 0m0.896s 00:29:30.402 sys 0m1.666s 00:29:30.402 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:30.402 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:30.402 ************************************ 00:29:30.402 END TEST nvmf_target_multipath 00:29:30.402 ************************************ 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:30.696 ************************************ 00:29:30.696 START TEST nvmf_zcopy 00:29:30.696 ************************************ 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:29:30.696 * Looking for test storage... 
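The multipath run above ends with `exit 0` after printing "only one NIC for nvmf test": the test registers `nvmftestfini` as a trap early (`trap nvmftestfini SIGINT SIGTERM EXIT`), so teardown still runs on the early skip path. A minimal sketch of that guard-plus-trap pattern, with hypothetical names and a subshell standing in for the test script:

```shell
# Sketch (hypothetical names) of the skip-and-cleanup pattern in the trace:
# teardown is registered as an EXIT trap before the topology check, so it
# fires whether the test runs or skips.
nvmf_guarded_run() {
    local second_nic=$1
    (
        trap 'echo "nvmftestfini: teardown ran"' EXIT  # mirrors trap nvmftestfini ... EXIT
        if [ -z "$second_nic" ]; then
            echo "only one NIC for nvmf test"
            exit 0                                     # skip is not a failure
        fi
        echo "running multipath against $second_nic"
        exit 0
    )
}
```

Registering the trap before any setup work is the design point: every exit path, including signals and early skips, reaches the same teardown.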
00:29:30.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:29:30.696 09:20:07 
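The `scripts/common.sh` trace above (`lt 1.15 2` via `cmp_versions`) compares versions by splitting each string into numeric fields and deciding at the first field that differs, padding the shorter version with zeros. A compact re-implementation for illustration (this is an assumption-based sketch of the traced logic, not the SPDK source):

```shell
# Component-wise "less than" over dotted version strings, as traced above:
# split on ".", compare fields numerically, missing fields count as 0.
# Returns 0 (true) iff $1 < $2. Illustrative re-implementation.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0   # strictly less at first differing field
        (( x > y )) && return 1
    done
    return 1                      # equal versions are not "<"
}
```

Numeric field comparison is what makes `1.15 < 2` hold even though `"1.15" > "2"` would be false lexically only by accident; a plain string compare would also wrongly rank `2.9` above `2.10`.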
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:30.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.696 --rc genhtml_branch_coverage=1 00:29:30.696 --rc genhtml_function_coverage=1 00:29:30.696 --rc genhtml_legend=1 00:29:30.696 --rc geninfo_all_blocks=1 00:29:30.696 --rc geninfo_unexecuted_blocks=1 00:29:30.696 00:29:30.696 ' 00:29:30.696 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:30.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.696 --rc genhtml_branch_coverage=1 00:29:30.696 --rc genhtml_function_coverage=1 00:29:30.696 --rc genhtml_legend=1 00:29:30.696 --rc geninfo_all_blocks=1 00:29:30.696 --rc geninfo_unexecuted_blocks=1 00:29:30.696 00:29:30.697 ' 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:30.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.697 --rc genhtml_branch_coverage=1 00:29:30.697 --rc genhtml_function_coverage=1 00:29:30.697 --rc genhtml_legend=1 00:29:30.697 --rc geninfo_all_blocks=1 00:29:30.697 --rc geninfo_unexecuted_blocks=1 00:29:30.697 00:29:30.697 ' 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:30.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.697 --rc genhtml_branch_coverage=1 00:29:30.697 --rc genhtml_function_coverage=1 00:29:30.697 --rc genhtml_legend=1 00:29:30.697 --rc geninfo_all_blocks=1 00:29:30.697 --rc geninfo_unexecuted_blocks=1 00:29:30.697 00:29:30.697 ' 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:30.697 09:20:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:30.697 09:20:07 
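The `paths/export.sh` entries above show PATH accumulating the same `/opt/go`, `/opt/protoc`, and `/opt/golangci` directories over and over, because each `source` prepends without checking for duplicates. An order-preserving de-duplication sketch (the function name `dedup_path` is hypothetical, not part of the SPDK scripts):

```shell
#!/usr/bin/env bash
# Sketch: collapse repeated PATH entries like those produced by the
# repeated sourcing of paths/export.sh above, keeping first occurrences
# in order. "dedup_path" is a hypothetical helper name.
dedup_path() {
    local entry out= seen=:
    local IFS=:
    for entry in $1; do                    # IFS=: splits on colons
        case $seen in
            *":$entry:"*) ;;               # duplicate, skip it
            *) out=${out:+$out:}$entry     # first occurrence, keep it
               seen+="$entry:" ;;
        esac
    done
    printf '%s\n' "$out"
}

dedup_path "/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/bin:/usr/bin"
```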
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:29:30.697 09:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:33.228 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:33.228 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:29:33.228 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:33.228 
09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:33.229 09:20:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:33.229 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:33.229 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:33.229 Found net devices under 0000:09:00.0: cvl_0_0 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:33.229 Found net devices under 0000:09:00.1: cvl_0_1 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:33.229 09:20:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:33.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:33.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:29:33.229 00:29:33.229 --- 10.0.0.2 ping statistics --- 00:29:33.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.229 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:29:33.229 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:33.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:33.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:29:33.230 00:29:33.230 --- 10.0.0.1 ping statistics --- 00:29:33.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.230 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:29:33.230 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:33.230 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:29:33.230 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:33.230 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:33.230 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:33.230 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:33.230 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:33.230 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:33.230 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:33.230 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:29:33.230 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:33.230 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:33.230 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:33.230 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=3478349 00:29:33.230 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:29:33.230 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3478349 00:29:33.230 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 3478349 ']' 00:29:33.230 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.230 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:33.230 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:33.230 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:33.230 09:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:33.230 [2024-11-20 09:20:09.896584] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:33.230 [2024-11-20 09:20:09.897665] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:29:33.230 [2024-11-20 09:20:09.897723] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:33.230 [2024-11-20 09:20:09.968693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.230 [2024-11-20 09:20:10.026670] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:33.230 [2024-11-20 09:20:10.026729] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:33.230 [2024-11-20 09:20:10.026744] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:33.230 [2024-11-20 09:20:10.026757] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:33.230 [2024-11-20 09:20:10.026767] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:33.230 [2024-11-20 09:20:10.027374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:33.230 [2024-11-20 09:20:10.117779] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:33.230 [2024-11-20 09:20:10.118092] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:33.230 [2024-11-20 09:20:10.175974] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:33.230 
09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:33.230 [2024-11-20 09:20:10.192131] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:33.230 malloc0 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:33.230 { 00:29:33.230 "params": { 00:29:33.230 "name": "Nvme$subsystem", 00:29:33.230 "trtype": "$TEST_TRANSPORT", 00:29:33.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:33.230 "adrfam": "ipv4", 00:29:33.230 "trsvcid": "$NVMF_PORT", 00:29:33.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:33.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:33.230 "hdgst": ${hdgst:-false}, 00:29:33.230 "ddgst": ${ddgst:-false} 00:29:33.230 }, 00:29:33.230 "method": "bdev_nvme_attach_controller" 00:29:33.230 } 00:29:33.230 EOF 00:29:33.230 )") 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:29:33.230 09:20:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:29:33.230 09:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:33.230 "params": { 00:29:33.230 "name": "Nvme1", 00:29:33.230 "trtype": "tcp", 00:29:33.230 "traddr": "10.0.0.2", 00:29:33.230 "adrfam": "ipv4", 00:29:33.230 "trsvcid": "4420", 00:29:33.230 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:33.230 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:33.230 "hdgst": false, 00:29:33.230 "ddgst": false 00:29:33.230 }, 00:29:33.230 "method": "bdev_nvme_attach_controller" 00:29:33.230 }' 00:29:33.488 [2024-11-20 09:20:10.279273] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:29:33.488 [2024-11-20 09:20:10.279375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3478376 ] 00:29:33.488 [2024-11-20 09:20:10.348526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.488 [2024-11-20 09:20:10.414698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.746 Running I/O for 10 seconds... 
00:29:36.049 5089.00 IOPS, 39.76 MiB/s [2024-11-20T08:20:14.012Z] 5170.50 IOPS, 40.39 MiB/s [2024-11-20T08:20:14.945Z] 5195.67 IOPS, 40.59 MiB/s [2024-11-20T08:20:15.879Z] 5194.00 IOPS, 40.58 MiB/s [2024-11-20T08:20:16.812Z] 5201.40 IOPS, 40.64 MiB/s [2024-11-20T08:20:18.182Z] 5205.67 IOPS, 40.67 MiB/s [2024-11-20T08:20:19.116Z] 5209.00 IOPS, 40.70 MiB/s [2024-11-20T08:20:20.050Z] 5208.12 IOPS, 40.69 MiB/s [2024-11-20T08:20:20.982Z] 5205.11 IOPS, 40.66 MiB/s [2024-11-20T08:20:20.982Z] 5202.40 IOPS, 40.64 MiB/s 00:29:43.953 Latency(us) 00:29:43.953 [2024-11-20T08:20:20.982Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:43.953 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:29:43.953 Verification LBA range: start 0x0 length 0x1000 00:29:43.954 Nvme1n1 : 10.02 5203.96 40.66 0.00 0.00 24531.13 4247.70 32234.00 00:29:43.954 [2024-11-20T08:20:20.983Z] =================================================================================================================== 00:29:43.954 [2024-11-20T08:20:20.983Z] Total : 5203.96 40.66 0.00 0.00 24531.13 4247.70 32234.00 00:29:44.212 09:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3479576 00:29:44.212 09:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:29:44.212 09:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:44.212 09:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:29:44.212 09:20:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:29:44.212 09:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:29:44.212 09:20:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:29:44.212 09:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:44.212 09:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:44.212 { 00:29:44.212 "params": { 00:29:44.212 "name": "Nvme$subsystem", 00:29:44.212 "trtype": "$TEST_TRANSPORT", 00:29:44.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:44.212 "adrfam": "ipv4", 00:29:44.212 "trsvcid": "$NVMF_PORT", 00:29:44.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:44.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:44.212 "hdgst": ${hdgst:-false}, 00:29:44.212 "ddgst": ${ddgst:-false} 00:29:44.212 }, 00:29:44.212 "method": "bdev_nvme_attach_controller" 00:29:44.212 } 00:29:44.212 EOF 00:29:44.212 )") 00:29:44.212 09:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:29:44.212 [2024-11-20 09:20:21.003956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.212 [2024-11-20 09:20:21.003998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.212 09:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:29:44.212 09:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:29:44.212 09:20:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:44.212 "params": { 00:29:44.212 "name": "Nvme1", 00:29:44.212 "trtype": "tcp", 00:29:44.212 "traddr": "10.0.0.2", 00:29:44.212 "adrfam": "ipv4", 00:29:44.212 "trsvcid": "4420", 00:29:44.212 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:44.212 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:44.212 "hdgst": false, 00:29:44.212 "ddgst": false 00:29:44.212 }, 00:29:44.212 "method": "bdev_nvme_attach_controller" 00:29:44.212 }' 00:29:44.212 [2024-11-20 09:20:21.011882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.212 [2024-11-20 09:20:21.011920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.212 [2024-11-20 09:20:21.019881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.212 [2024-11-20 09:20:21.019910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.212 [2024-11-20 09:20:21.027875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.212 [2024-11-20 09:20:21.027894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.212 [2024-11-20 09:20:21.035860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.212 [2024-11-20 09:20:21.035880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.212 [2024-11-20 09:20:21.043859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.212 [2024-11-20 09:20:21.043878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.212 [2024-11-20 09:20:21.048340] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:29:44.212 [2024-11-20 09:20:21.048423] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3479576 ] 00:29:44.212 [2024-11-20 09:20:21.051858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.212 [2024-11-20 09:20:21.051887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.212 [2024-11-20 09:20:21.059859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.212 [2024-11-20 09:20:21.059879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.212 [2024-11-20 09:20:21.067861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.212 [2024-11-20 09:20:21.067881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.212 [2024-11-20 09:20:21.075859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.212 [2024-11-20 09:20:21.075879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.212 [2024-11-20 09:20:21.083860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.212 [2024-11-20 09:20:21.083880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.212 [2024-11-20 09:20:21.091860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.212 [2024-11-20 09:20:21.091880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.212 [2024-11-20 09:20:21.099861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.212 [2024-11-20 09:20:21.099880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:29:44.212 [2024-11-20 09:20:21.107863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.212 [2024-11-20 09:20:21.107884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.212 [2024-11-20 09:20:21.115861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.212 [2024-11-20 09:20:21.115880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.212 [2024-11-20 09:20:21.120565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.212 [2024-11-20 09:20:21.123863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.212 [2024-11-20 09:20:21.123883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.212 [2024-11-20 09:20:21.131911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.212 [2024-11-20 09:20:21.131945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.212 [2024-11-20 09:20:21.139898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.212 [2024-11-20 09:20:21.139943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.212 [2024-11-20 09:20:21.147862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.212 [2024-11-20 09:20:21.147882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.212 [2024-11-20 09:20:21.155862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.212 [2024-11-20 09:20:21.155881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.212 [2024-11-20 09:20:21.163860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.212 [2024-11-20 09:20:21.163880] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.212 [2024-11-20 09:20:21.171862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.212 [2024-11-20 09:20:21.171881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.212 [2024-11-20 09:20:21.179861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.212 [2024-11-20 09:20:21.179880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.212 [2024-11-20 09:20:21.187861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.212 [2024-11-20 09:20:21.187881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.212 [2024-11-20 09:20:21.188025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:44.212 [2024-11-20 09:20:21.195861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.212 [2024-11-20 09:20:21.195880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.212 [2024-11-20 09:20:21.203895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.212 [2024-11-20 09:20:21.203925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.212 [2024-11-20 09:20:21.211909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.212 [2024-11-20 09:20:21.211944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.212 [2024-11-20 09:20:21.219905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.212 [2024-11-20 09:20:21.219939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.213 [2024-11-20 09:20:21.227906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:29:44.213 [2024-11-20 09:20:21.227943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.213 [2024-11-20 09:20:21.235949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.213 [2024-11-20 09:20:21.235988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.471 [2024-11-20 09:20:21.243911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.471 [2024-11-20 09:20:21.243948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.471 [2024-11-20 09:20:21.251914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.471 [2024-11-20 09:20:21.251952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.471 [2024-11-20 09:20:21.259878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.471 [2024-11-20 09:20:21.259900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.471 [2024-11-20 09:20:21.267908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.471 [2024-11-20 09:20:21.267942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.471 [2024-11-20 09:20:21.275902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.471 [2024-11-20 09:20:21.275939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.471 [2024-11-20 09:20:21.283902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.471 [2024-11-20 09:20:21.283938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.471 [2024-11-20 09:20:21.291867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.471 [2024-11-20 
09:20:21.291887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.471 [2024-11-20 09:20:21.299866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.471 [2024-11-20 09:20:21.299886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.471 [2024-11-20 09:20:21.307870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.471 [2024-11-20 09:20:21.307894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.471 [2024-11-20 09:20:21.315868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.471 [2024-11-20 09:20:21.315892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.471 [2024-11-20 09:20:21.323868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.471 [2024-11-20 09:20:21.323892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.471 [2024-11-20 09:20:21.331866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.471 [2024-11-20 09:20:21.331887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.471 [2024-11-20 09:20:21.339984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.471 [2024-11-20 09:20:21.340010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.471 [2024-11-20 09:20:21.347884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.471 [2024-11-20 09:20:21.347906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.471 [2024-11-20 09:20:21.355878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.471 [2024-11-20 09:20:21.355898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:29:44.471 [2024-11-20 09:20:21.363876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.471 [2024-11-20 09:20:21.363896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.471 [2024-11-20 09:20:21.371862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.471 [2024-11-20 09:20:21.371896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.471 [2024-11-20 09:20:21.379861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.471 [2024-11-20 09:20:21.379880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.471 [2024-11-20 09:20:21.387865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.471 [2024-11-20 09:20:21.387885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.471 [2024-11-20 09:20:21.395863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.471 [2024-11-20 09:20:21.395884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.471 [2024-11-20 09:20:21.403862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.471 [2024-11-20 09:20:21.403883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.471 [2024-11-20 09:20:21.411861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.471 [2024-11-20 09:20:21.411882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.471 [2024-11-20 09:20:21.419861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.471 [2024-11-20 09:20:21.419880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.471 
[2024-11-20 09:20:21.427860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.471 [2024-11-20 09:20:21.427879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.471 [2024-11-20 09:20:21.435879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.471 [2024-11-20 09:20:21.435901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.471 [2024-11-20 09:20:21.443864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.471 [2024-11-20 09:20:21.443885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.471 [2024-11-20 09:20:21.451863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.471 [2024-11-20 09:20:21.451883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.471 [2024-11-20 09:20:21.459863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.471 [2024-11-20 09:20:21.459883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.471 [2024-11-20 09:20:21.467860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.471 [2024-11-20 09:20:21.467879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.471 [2024-11-20 09:20:21.475864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.471 [2024-11-20 09:20:21.475885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.471 [2024-11-20 09:20:21.483867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.471 [2024-11-20 09:20:21.483891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.471 [2024-11-20 09:20:21.491900] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.471 [2024-11-20 09:20:21.491925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.729 Running I/O for 5 seconds... 00:29:44.729 [2024-11-20 09:20:21.499877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.729 [2024-11-20 09:20:21.499919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.729 [2024-11-20 09:20:21.513191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.729 [2024-11-20 09:20:21.513218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.729 [2024-11-20 09:20:21.528933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.729 [2024-11-20 09:20:21.528960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.729 [2024-11-20 09:20:21.538270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.729 [2024-11-20 09:20:21.538319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.729 [2024-11-20 09:20:21.555035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.729 [2024-11-20 09:20:21.555062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.729 [2024-11-20 09:20:21.565662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.729 [2024-11-20 09:20:21.565700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.729 [2024-11-20 09:20:21.580982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.729 [2024-11-20 09:20:21.581007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.729 [2024-11-20 09:20:21.591098] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.729 [2024-11-20 09:20:21.591122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.729 [2024-11-20 09:20:21.603361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.729 [2024-11-20 09:20:21.603389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.729 [2024-11-20 09:20:21.614711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.729 [2024-11-20 09:20:21.614735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.729 [2024-11-20 09:20:21.630146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.729 [2024-11-20 09:20:21.630171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.729 [2024-11-20 09:20:21.639752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.729 [2024-11-20 09:20:21.639776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.729 [2024-11-20 09:20:21.651975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.729 [2024-11-20 09:20:21.652002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.729 [2024-11-20 09:20:21.663062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.729 [2024-11-20 09:20:21.663086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.729 [2024-11-20 09:20:21.678089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.729 [2024-11-20 09:20:21.678129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.729 [2024-11-20 09:20:21.687850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:29:44.729 [2024-11-20 09:20:21.687875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.729 [2024-11-20 09:20:21.700217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.729 [2024-11-20 09:20:21.700242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.729 [2024-11-20 09:20:21.711446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.729 [2024-11-20 09:20:21.711473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.729 [2024-11-20 09:20:21.726352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.729 [2024-11-20 09:20:21.726393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.729 [2024-11-20 09:20:21.741888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.729 [2024-11-20 09:20:21.741914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.729 [2024-11-20 09:20:21.751444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.729 [2024-11-20 09:20:21.751470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.988 [2024-11-20 09:20:21.763775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.988 [2024-11-20 09:20:21.763802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.988 [2024-11-20 09:20:21.775125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.988 [2024-11-20 09:20:21.775149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.988 [2024-11-20 09:20:21.788373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.988 
[2024-11-20 09:20:21.788400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.988 [2024-11-20 09:20:21.797596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.988 [2024-11-20 09:20:21.797622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.988 [2024-11-20 09:20:21.809779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.988 [2024-11-20 09:20:21.809803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.988 [2024-11-20 09:20:21.825399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.988 [2024-11-20 09:20:21.825426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.988 [2024-11-20 09:20:21.834983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.988 [2024-11-20 09:20:21.835009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.988 [2024-11-20 09:20:21.849629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.988 [2024-11-20 09:20:21.849669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.988 [2024-11-20 09:20:21.859701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.988 [2024-11-20 09:20:21.859726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.988 [2024-11-20 09:20:21.871390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.988 [2024-11-20 09:20:21.871424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.988 [2024-11-20 09:20:21.885039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.988 [2024-11-20 09:20:21.885064] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.988 [2024-11-20 09:20:21.895008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.988 [2024-11-20 09:20:21.895032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.988 [2024-11-20 09:20:21.909999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.988 [2024-11-20 09:20:21.910023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.988 [2024-11-20 09:20:21.926041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.988 [2024-11-20 09:20:21.926067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.988 [2024-11-20 09:20:21.935946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.988 [2024-11-20 09:20:21.935984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.988 [2024-11-20 09:20:21.947903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.988 [2024-11-20 09:20:21.947927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.988 [2024-11-20 09:20:21.958722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.988 [2024-11-20 09:20:21.958746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.988 [2024-11-20 09:20:21.969688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.988 [2024-11-20 09:20:21.969713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.988 [2024-11-20 09:20:21.983409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.988 [2024-11-20 09:20:21.983436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:29:44.988 [2024-11-20 09:20:21.993336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.988 [2024-11-20 09:20:21.993361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:44.988 [2024-11-20 09:20:22.004967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:44.988 [2024-11-20 09:20:22.004991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.246 [2024-11-20 09:20:22.015205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.246 [2024-11-20 09:20:22.015246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.246 [2024-11-20 09:20:22.030524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.246 [2024-11-20 09:20:22.030550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.246 [2024-11-20 09:20:22.043180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.246 [2024-11-20 09:20:22.043206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.246 [2024-11-20 09:20:22.053176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.246 [2024-11-20 09:20:22.053200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.246 [2024-11-20 09:20:22.069236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.246 [2024-11-20 09:20:22.069260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.246 [2024-11-20 09:20:22.079197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.246 [2024-11-20 09:20:22.079221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.246 [2024-11-20 09:20:22.091379] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.246 [2024-11-20 09:20:22.091405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.246 [2024-11-20 09:20:22.102487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.246 [2024-11-20 09:20:22.102521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.246 [2024-11-20 09:20:22.119114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.246 [2024-11-20 09:20:22.119140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.246 [2024-11-20 09:20:22.128970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.246 [2024-11-20 09:20:22.128996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.246 [2024-11-20 09:20:22.140989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.246 [2024-11-20 09:20:22.141014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.246 [2024-11-20 09:20:22.157564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.246 [2024-11-20 09:20:22.157606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.247 [2024-11-20 09:20:22.166973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.247 [2024-11-20 09:20:22.166998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.247 [2024-11-20 09:20:22.181514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.247 [2024-11-20 09:20:22.181541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.247 [2024-11-20 09:20:22.191631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:29:45.247 [2024-11-20 09:20:22.191670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.247 [2024-11-20 09:20:22.203833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.247 [2024-11-20 09:20:22.203858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.247 [2024-11-20 09:20:22.215086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.247 [2024-11-20 09:20:22.215110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.247 [2024-11-20 09:20:22.229257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.247 [2024-11-20 09:20:22.229297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.247 [2024-11-20 09:20:22.239097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.247 [2024-11-20 09:20:22.239122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.247 [2024-11-20 09:20:22.251648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.247 [2024-11-20 09:20:22.251673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.247 [2024-11-20 09:20:22.262867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.247 [2024-11-20 09:20:22.262892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.505 [2024-11-20 09:20:22.277575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.505 [2024-11-20 09:20:22.277617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.505 [2024-11-20 09:20:22.287299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.505 
[2024-11-20 09:20:22.287331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.505 [2024-11-20 09:20:22.299047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.505 [2024-11-20 09:20:22.299072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.505 [2024-11-20 09:20:22.309810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.505 [2024-11-20 09:20:22.309834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.505 [2024-11-20 09:20:22.325377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.505 [2024-11-20 09:20:22.325403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.505 [2024-11-20 09:20:22.334933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.505 [2024-11-20 09:20:22.334965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.505 [2024-11-20 09:20:22.349880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.505 [2024-11-20 09:20:22.349904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.505 [2024-11-20 09:20:22.358965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.505 [2024-11-20 09:20:22.358989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.505 [2024-11-20 09:20:22.372881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.505 [2024-11-20 09:20:22.372905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.505 [2024-11-20 09:20:22.382689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.505 [2024-11-20 09:20:22.382713] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.505 [2024-11-20 09:20:22.398141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.505 [2024-11-20 09:20:22.398166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.505 [2024-11-20 09:20:22.408091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.505 [2024-11-20 09:20:22.408116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.505 [2024-11-20 09:20:22.419889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.505 [2024-11-20 09:20:22.419913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.505 [2024-11-20 09:20:22.430580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.505 [2024-11-20 09:20:22.430621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.505 [2024-11-20 09:20:22.444748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.505 [2024-11-20 09:20:22.444774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.505 [2024-11-20 09:20:22.454342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.505 [2024-11-20 09:20:22.454368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.505 [2024-11-20 09:20:22.469559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.505 [2024-11-20 09:20:22.469585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.505 [2024-11-20 09:20:22.488414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.505 [2024-11-20 09:20:22.488440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:29:45.505 [2024-11-20 09:20:22.499286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.505 [2024-11-20 09:20:22.499316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.505 11423.00 IOPS, 89.24 MiB/s [2024-11-20T08:20:22.534Z] [2024-11-20 09:20:22.514503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.505 [2024-11-20 09:20:22.514530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.505 [2024-11-20 09:20:22.530351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.505 [2024-11-20 09:20:22.530379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.763 [2024-11-20 09:20:22.548709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.763 [2024-11-20 09:20:22.548735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.763 [2024-11-20 09:20:22.559260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.763 [2024-11-20 09:20:22.559286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.763 [2024-11-20 09:20:22.572594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.763 [2024-11-20 09:20:22.572621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.763 [2024-11-20 09:20:22.582029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.763 [2024-11-20 09:20:22.582055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.763 [2024-11-20 09:20:22.593824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.763 [2024-11-20 09:20:22.593849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:29:45.763 [2024-11-20 09:20:22.607942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.763 [2024-11-20 09:20:22.607968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.763 [2024-11-20 09:20:22.617599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.763 [2024-11-20 09:20:22.617626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.763 [2024-11-20 09:20:22.629475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.763 [2024-11-20 09:20:22.629502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.763 [2024-11-20 09:20:22.645066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.763 [2024-11-20 09:20:22.645093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.763 [2024-11-20 09:20:22.654701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.763 [2024-11-20 09:20:22.654728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.763 [2024-11-20 09:20:22.669555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.763 [2024-11-20 09:20:22.669583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.763 [2024-11-20 09:20:22.685842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.763 [2024-11-20 09:20:22.685868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.763 [2024-11-20 09:20:22.695268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.763 [2024-11-20 09:20:22.695324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.763 [2024-11-20 09:20:22.707223] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.763 [2024-11-20 09:20:22.707250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.763 [2024-11-20 09:20:22.721209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.763 [2024-11-20 09:20:22.721250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.763 [2024-11-20 09:20:22.731212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.763 [2024-11-20 09:20:22.731237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.763 [2024-11-20 09:20:22.743454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.763 [2024-11-20 09:20:22.743481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.763 [2024-11-20 09:20:22.755937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.763 [2024-11-20 09:20:22.755964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.763 [2024-11-20 09:20:22.765458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.763 [2024-11-20 09:20:22.765485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.763 [2024-11-20 09:20:22.777127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.763 [2024-11-20 09:20:22.777151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:45.763 [2024-11-20 09:20:22.788068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:45.763 [2024-11-20 09:20:22.788093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.021 [2024-11-20 09:20:22.799075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:29:46.021 [2024-11-20 09:20:22.799101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.021 [2024-11-20 09:20:22.812121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.021 [2024-11-20 09:20:22.812149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.021 [2024-11-20 09:20:22.821959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.021 [2024-11-20 09:20:22.821985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.021 [2024-11-20 09:20:22.833712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.021 [2024-11-20 09:20:22.833736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.021 [2024-11-20 09:20:22.850142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.021 [2024-11-20 09:20:22.850181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.021 [2024-11-20 09:20:22.867918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.021 [2024-11-20 09:20:22.867944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.021 [2024-11-20 09:20:22.877709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.021 [2024-11-20 09:20:22.877734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.021 [2024-11-20 09:20:22.889773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.022 [2024-11-20 09:20:22.889798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.022 [2024-11-20 09:20:22.901022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.022 
[2024-11-20 09:20:22.901047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.022 [2024-11-20 09:20:22.911698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.022 [2024-11-20 09:20:22.911722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.022 [2024-11-20 09:20:22.923705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.022 [2024-11-20 09:20:22.923731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.022 [2024-11-20 09:20:22.933242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.022 [2024-11-20 09:20:22.933268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.022 [2024-11-20 09:20:22.949217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.022 [2024-11-20 09:20:22.949242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.022 [2024-11-20 09:20:22.958450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.022 [2024-11-20 09:20:22.958477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.022 [2024-11-20 09:20:22.969925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.022 [2024-11-20 09:20:22.969950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.022 [2024-11-20 09:20:22.985726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.022 [2024-11-20 09:20:22.985753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.022 [2024-11-20 09:20:22.995586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.022 [2024-11-20 09:20:22.995631] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.022 [2024-11-20 09:20:23.007234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.022 [2024-11-20 09:20:23.007261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.022 [2024-11-20 09:20:23.017224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.022 [2024-11-20 09:20:23.017248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.022 [2024-11-20 09:20:23.028549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.022 [2024-11-20 09:20:23.028592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.022 [2024-11-20 09:20:23.039069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.022 [2024-11-20 09:20:23.039094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.280 [2024-11-20 09:20:23.053821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.280 [2024-11-20 09:20:23.053847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.280 [2024-11-20 09:20:23.063121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.280 [2024-11-20 09:20:23.063147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.280 [2024-11-20 09:20:23.074639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.280 [2024-11-20 09:20:23.074680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.280 [2024-11-20 09:20:23.090397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.280 [2024-11-20 09:20:23.090423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:29:46.280 [2024-11-20 09:20:23.099872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.280 [2024-11-20 09:20:23.099896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.280 [2024-11-20 09:20:23.112182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.280 [2024-11-20 09:20:23.112207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.280 [2024-11-20 09:20:23.123639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.280 [2024-11-20 09:20:23.123678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.280 [2024-11-20 09:20:23.135004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.280 [2024-11-20 09:20:23.135030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.280 [2024-11-20 09:20:23.149745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.280 [2024-11-20 09:20:23.149785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.280 [2024-11-20 09:20:23.158887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.280 [2024-11-20 09:20:23.158911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.280 [2024-11-20 09:20:23.173164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.280 [2024-11-20 09:20:23.173190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.280 [2024-11-20 09:20:23.183165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.280 [2024-11-20 09:20:23.183190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.280 [2024-11-20 09:20:23.194759] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.280 [2024-11-20 09:20:23.194785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.280 [2024-11-20 09:20:23.209493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.280 [2024-11-20 09:20:23.209520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.280 [2024-11-20 09:20:23.218785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.280 [2024-11-20 09:20:23.218810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.281 [2024-11-20 09:20:23.233507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.281 [2024-11-20 09:20:23.233534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.281 [2024-11-20 09:20:23.243063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.281 [2024-11-20 09:20:23.243088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.281 [2024-11-20 09:20:23.257128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.281 [2024-11-20 09:20:23.257153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.281 [2024-11-20 09:20:23.267190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.281 [2024-11-20 09:20:23.267216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.281 [2024-11-20 09:20:23.279237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.281 [2024-11-20 09:20:23.279263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.281 [2024-11-20 09:20:23.292869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:29:46.281 [2024-11-20 09:20:23.292896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.281 [2024-11-20 09:20:23.302629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.281 [2024-11-20 09:20:23.302657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.539 [2024-11-20 09:20:23.317646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.539 [2024-11-20 09:20:23.317685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.539 [2024-11-20 09:20:23.327563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.539 [2024-11-20 09:20:23.327606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.539 [2024-11-20 09:20:23.339852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.539 [2024-11-20 09:20:23.339879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.539 [2024-11-20 09:20:23.351226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.539 [2024-11-20 09:20:23.351251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.539 [2024-11-20 09:20:23.363505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.539 [2024-11-20 09:20:23.363533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.539 [2024-11-20 09:20:23.373020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.539 [2024-11-20 09:20:23.373045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.539 [2024-11-20 09:20:23.384868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.539 
[2024-11-20 09:20:23.384895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.539 [2024-11-20 09:20:23.396110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.539 [2024-11-20 09:20:23.396135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.539 [2024-11-20 09:20:23.407110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.539 [2024-11-20 09:20:23.407150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.539 [2024-11-20 09:20:23.418019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.539 [2024-11-20 09:20:23.418044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.539 [2024-11-20 09:20:23.432320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.539 [2024-11-20 09:20:23.432345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.539 [2024-11-20 09:20:23.442553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.539 [2024-11-20 09:20:23.442580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.539 [2024-11-20 09:20:23.454273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.539 [2024-11-20 09:20:23.454320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.539 [2024-11-20 09:20:23.467378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.539 [2024-11-20 09:20:23.467406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.539 [2024-11-20 09:20:23.477010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.539 [2024-11-20 09:20:23.477045] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.539 [2024-11-20 09:20:23.493015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.539 [2024-11-20 09:20:23.493040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.539 [2024-11-20 09:20:23.502669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.539 [2024-11-20 09:20:23.502694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.539 11527.00 IOPS, 90.05 MiB/s [2024-11-20T08:20:23.568Z] [2024-11-20 09:20:23.517193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.540 [2024-11-20 09:20:23.517218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.540 [2024-11-20 09:20:23.526684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.540 [2024-11-20 09:20:23.526709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.540 [2024-11-20 09:20:23.541111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.540 [2024-11-20 09:20:23.541136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.540 [2024-11-20 09:20:23.550759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.540 [2024-11-20 09:20:23.550786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.540 [2024-11-20 09:20:23.564795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.540 [2024-11-20 09:20:23.564823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.798 [2024-11-20 09:20:23.574683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.798 [2024-11-20 09:20:23.574708] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.798 [2024-11-20 09:20:23.589025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.798 [2024-11-20 09:20:23.589050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.798 [2024-11-20 09:20:23.598614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.798 [2024-11-20 09:20:23.598655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.798 [2024-11-20 09:20:23.613567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.798 [2024-11-20 09:20:23.613608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.798 [2024-11-20 09:20:23.623182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.798 [2024-11-20 09:20:23.623206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.798 [2024-11-20 09:20:23.635506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.798 [2024-11-20 09:20:23.635532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.798 [2024-11-20 09:20:23.646563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.798 [2024-11-20 09:20:23.646602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.798 [2024-11-20 09:20:23.661777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.799 [2024-11-20 09:20:23.661802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.799 [2024-11-20 09:20:23.679755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.799 [2024-11-20 09:20:23.679779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:29:46.799 [2024-11-20 09:20:23.689282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.799 [2024-11-20 09:20:23.689312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.799 [2024-11-20 09:20:23.705315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.799 [2024-11-20 09:20:23.705340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.799 [2024-11-20 09:20:23.715649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.799 [2024-11-20 09:20:23.715683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.799 [2024-11-20 09:20:23.727539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.799 [2024-11-20 09:20:23.727575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.799 [2024-11-20 09:20:23.738368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.799 [2024-11-20 09:20:23.738394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.799 [2024-11-20 09:20:23.753141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.799 [2024-11-20 09:20:23.753167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.799 [2024-11-20 09:20:23.762998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.799 [2024-11-20 09:20:23.763022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.799 [2024-11-20 09:20:23.778176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.799 [2024-11-20 09:20:23.778216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.799 [2024-11-20 09:20:23.787895] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.799 [2024-11-20 09:20:23.787919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.799 [2024-11-20 09:20:23.799883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.799 [2024-11-20 09:20:23.799906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.799 [2024-11-20 09:20:23.811445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.799 [2024-11-20 09:20:23.811472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:46.799 [2024-11-20 09:20:23.822506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:46.799 [2024-11-20 09:20:23.822534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.057 [2024-11-20 09:20:23.835715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.057 [2024-11-20 09:20:23.835741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.057 [2024-11-20 09:20:23.846154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.057 [2024-11-20 09:20:23.846179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.057 [2024-11-20 09:20:23.858086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.057 [2024-11-20 09:20:23.858111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.057 [2024-11-20 09:20:23.874208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.057 [2024-11-20 09:20:23.874235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.057 [2024-11-20 09:20:23.889901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:29:47.057 [2024-11-20 09:20:23.889927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.057 [2024-11-20 09:20:23.906050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.057 [2024-11-20 09:20:23.906075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.057 [2024-11-20 09:20:23.922057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.057 [2024-11-20 09:20:23.922096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.057 [2024-11-20 09:20:23.937378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.057 [2024-11-20 09:20:23.937418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.057 [2024-11-20 09:20:23.946959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.057 [2024-11-20 09:20:23.946982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.057 [2024-11-20 09:20:23.961960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.057 [2024-11-20 09:20:23.962006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.057 [2024-11-20 09:20:23.972048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.057 [2024-11-20 09:20:23.972072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.057 [2024-11-20 09:20:23.984142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.057 [2024-11-20 09:20:23.984166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.057 [2024-11-20 09:20:23.995280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.057 
[2024-11-20 09:20:23.995311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.057 [2024-11-20 09:20:24.006387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.057 [2024-11-20 09:20:24.006414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.057 [2024-11-20 09:20:24.021170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.057 [2024-11-20 09:20:24.021197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.057 [2024-11-20 09:20:24.030886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.057 [2024-11-20 09:20:24.030911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.057 [2024-11-20 09:20:24.046743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.057 [2024-11-20 09:20:24.046782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.057 [2024-11-20 09:20:24.061241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.057 [2024-11-20 09:20:24.061266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.057 [2024-11-20 09:20:24.071073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.057 [2024-11-20 09:20:24.071097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.315 [2024-11-20 09:20:24.085007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.315 [2024-11-20 09:20:24.085033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.315 [2024-11-20 09:20:24.095377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.315 [2024-11-20 09:20:24.095403] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.315 [2024-11-20 09:20:24.107389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.315 [2024-11-20 09:20:24.107416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.315 [2024-11-20 09:20:24.119999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.315 [2024-11-20 09:20:24.120024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.315 [2024-11-20 09:20:24.129457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.315 [2024-11-20 09:20:24.129498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.315 [2024-11-20 09:20:24.146218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.315 [2024-11-20 09:20:24.146243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.315 [2024-11-20 09:20:24.156175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.315 [2024-11-20 09:20:24.156200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.315 [2024-11-20 09:20:24.168424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.315 [2024-11-20 09:20:24.168451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.315 [2024-11-20 09:20:24.179336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.315 [2024-11-20 09:20:24.179377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.315 [2024-11-20 09:20:24.190371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.315 [2024-11-20 09:20:24.190397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:29:47.315 [2024-11-20 09:20:24.206195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.315 [2024-11-20 09:20:24.206221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.315 [2024-11-20 09:20:24.221825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.315 [2024-11-20 09:20:24.221865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.315 [2024-11-20 09:20:24.230883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.315 [2024-11-20 09:20:24.230908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.315 [2024-11-20 09:20:24.245859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.315 [2024-11-20 09:20:24.245882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.315 [2024-11-20 09:20:24.260824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.315 [2024-11-20 09:20:24.260849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.315 [2024-11-20 09:20:24.270153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.315 [2024-11-20 09:20:24.270178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.315 [2024-11-20 09:20:24.285500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.315 [2024-11-20 09:20:24.285525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.315 [2024-11-20 09:20:24.295577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.315 [2024-11-20 09:20:24.295617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.316 [2024-11-20 09:20:24.306834] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.316 [2024-11-20 09:20:24.306858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.316 [2024-11-20 09:20:24.321736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.316 [2024-11-20 09:20:24.321760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.316 [2024-11-20 09:20:24.331417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.316 [2024-11-20 09:20:24.331444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.573 [2024-11-20 09:20:24.343637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.573 [2024-11-20 09:20:24.343661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.573 [2024-11-20 09:20:24.354352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.573 [2024-11-20 09:20:24.354378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.573 [2024-11-20 09:20:24.370310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.573 [2024-11-20 09:20:24.370351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.573 [2024-11-20 09:20:24.386181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.573 [2024-11-20 09:20:24.386206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.573 [2024-11-20 09:20:24.401903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.573 [2024-11-20 09:20:24.401929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.573 [2024-11-20 09:20:24.411735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:29:47.573 [2024-11-20 09:20:24.411759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.573 [2024-11-20 09:20:24.423617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.573 [2024-11-20 09:20:24.423658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.573 [2024-11-20 09:20:24.434062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.573 [2024-11-20 09:20:24.434087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.573 [2024-11-20 09:20:24.449015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.573 [2024-11-20 09:20:24.449040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.573 [2024-11-20 09:20:24.459157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.573 [2024-11-20 09:20:24.459183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.573 [2024-11-20 09:20:24.470725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.573 [2024-11-20 09:20:24.470749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.573 [2024-11-20 09:20:24.485413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.573 [2024-11-20 09:20:24.485441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.573 [2024-11-20 09:20:24.495341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.573 [2024-11-20 09:20:24.495381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.573 [2024-11-20 09:20:24.507580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.573 
[2024-11-20 09:20:24.507622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.573 11531.67 IOPS, 90.09 MiB/s [2024-11-20T08:20:24.602Z] [2024-11-20 09:20:24.518741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.573 [2024-11-20 09:20:24.518766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.573 [2024-11-20 09:20:24.531728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.573 [2024-11-20 09:20:24.531770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.573 [2024-11-20 09:20:24.542063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.573 [2024-11-20 09:20:24.542087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.573 [2024-11-20 09:20:24.554029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.573 [2024-11-20 09:20:24.554054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.573 [2024-11-20 09:20:24.569543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.573 [2024-11-20 09:20:24.569571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.573 [2024-11-20 09:20:24.578958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.573 [2024-11-20 09:20:24.578984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.573 [2024-11-20 09:20:24.594171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.573 [2024-11-20 09:20:24.594195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.830 [2024-11-20 09:20:24.612261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.830 
[2024-11-20 09:20:24.612301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.830 [2024-11-20 09:20:24.621532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.830 [2024-11-20 09:20:24.621559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.830 [2024-11-20 09:20:24.633343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.830 [2024-11-20 09:20:24.633383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.830 [2024-11-20 09:20:24.643776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.830 [2024-11-20 09:20:24.643803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.830 [2024-11-20 09:20:24.655578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.830 [2024-11-20 09:20:24.655627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.830 [2024-11-20 09:20:24.666585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.830 [2024-11-20 09:20:24.666625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.830 [2024-11-20 09:20:24.680712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.830 [2024-11-20 09:20:24.680738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.830 [2024-11-20 09:20:24.690632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.830 [2024-11-20 09:20:24.690671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.830 [2024-11-20 09:20:24.704702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.830 [2024-11-20 09:20:24.704726] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.830 [2024-11-20 09:20:24.714280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.830 [2024-11-20 09:20:24.714328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.830 [2024-11-20 09:20:24.726428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.830 [2024-11-20 09:20:24.726455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.830 [2024-11-20 09:20:24.739314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.830 [2024-11-20 09:20:24.739340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.830 [2024-11-20 09:20:24.751166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.830 [2024-11-20 09:20:24.751192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.830 [2024-11-20 09:20:24.765309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.830 [2024-11-20 09:20:24.765338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.830 [2024-11-20 09:20:24.775350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.830 [2024-11-20 09:20:24.775377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.830 [2024-11-20 09:20:24.787809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.830 [2024-11-20 09:20:24.787835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.830 [2024-11-20 09:20:24.798796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.830 [2024-11-20 09:20:24.798837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:29:47.830 [2024-11-20 09:20:24.813765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.830 [2024-11-20 09:20:24.813792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.830 [2024-11-20 09:20:24.823584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.830 [2024-11-20 09:20:24.823626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.830 [2024-11-20 09:20:24.835541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.830 [2024-11-20 09:20:24.835567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:47.830 [2024-11-20 09:20:24.846318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:47.830 [2024-11-20 09:20:24.846345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.089 [2024-11-20 09:20:24.860839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.089 [2024-11-20 09:20:24.860866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.089 [2024-11-20 09:20:24.870124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.089 [2024-11-20 09:20:24.870150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.089 [2024-11-20 09:20:24.881888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.089 [2024-11-20 09:20:24.881923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.089 [2024-11-20 09:20:24.897785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.089 [2024-11-20 09:20:24.897812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.089 [2024-11-20 09:20:24.907285] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.089 [2024-11-20 09:20:24.907336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.089 [2024-11-20 09:20:24.918507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.089 [2024-11-20 09:20:24.918534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.089 [2024-11-20 09:20:24.933039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.089 [2024-11-20 09:20:24.933066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.089 [2024-11-20 09:20:24.942193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.089 [2024-11-20 09:20:24.942220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.089 [2024-11-20 09:20:24.953710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.089 [2024-11-20 09:20:24.953735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.089 [2024-11-20 09:20:24.968941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.089 [2024-11-20 09:20:24.968968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.089 [2024-11-20 09:20:24.978644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.089 [2024-11-20 09:20:24.978671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.089 [2024-11-20 09:20:24.993462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.089 [2024-11-20 09:20:24.993489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.089 [2024-11-20 09:20:25.003102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:29:48.089 [2024-11-20 09:20:25.003129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.089 [2024-11-20 09:20:25.014751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.089 [2024-11-20 09:20:25.014777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.089 [2024-11-20 09:20:25.029179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.089 [2024-11-20 09:20:25.029205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.089 [2024-11-20 09:20:25.038675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.089 [2024-11-20 09:20:25.038700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.089 [2024-11-20 09:20:25.053856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.089 [2024-11-20 09:20:25.053882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.089 [2024-11-20 09:20:25.070000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.089 [2024-11-20 09:20:25.070026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.089 [2024-11-20 09:20:25.079756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.089 [2024-11-20 09:20:25.079782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.089 [2024-11-20 09:20:25.091944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.089 [2024-11-20 09:20:25.091971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.090 [2024-11-20 09:20:25.102822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.090 
[2024-11-20 09:20:25.102847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.348 [2024-11-20 09:20:25.117436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.348 [2024-11-20 09:20:25.117470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.348 [2024-11-20 09:20:25.127558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.348 [2024-11-20 09:20:25.127601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.348 [2024-11-20 09:20:25.139555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.348 [2024-11-20 09:20:25.139601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.348 [2024-11-20 09:20:25.152908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.348 [2024-11-20 09:20:25.152933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.348 [2024-11-20 09:20:25.162824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.348 [2024-11-20 09:20:25.162849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.348 [2024-11-20 09:20:25.177123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.348 [2024-11-20 09:20:25.177148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.348 [2024-11-20 09:20:25.187973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.348 [2024-11-20 09:20:25.187997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.348 [2024-11-20 09:20:25.198791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.348 [2024-11-20 09:20:25.198816] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.348 [2024-11-20 09:20:25.211532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.349 [2024-11-20 09:20:25.211559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.349 [2024-11-20 09:20:25.221646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.349 [2024-11-20 09:20:25.221687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.349 [2024-11-20 09:20:25.233417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.349 [2024-11-20 09:20:25.233444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.349 [2024-11-20 09:20:25.248238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.349 [2024-11-20 09:20:25.248264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.349 [2024-11-20 09:20:25.257532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.349 [2024-11-20 09:20:25.257559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.349 [2024-11-20 09:20:25.269614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.349 [2024-11-20 09:20:25.269640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.349 [2024-11-20 09:20:25.285772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.349 [2024-11-20 09:20:25.285799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.349 [2024-11-20 09:20:25.295380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.349 [2024-11-20 09:20:25.295407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:29:48.349 [2024-11-20 09:20:25.306933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:48.349 [2024-11-20 09:20:25.306959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:48.349 
[... preceding pair of *ERROR* messages repeated with new timestamps from 09:20:25.317 through 09:20:26.510 while I/O continued ...] 
11548.50 IOPS, 90.22 MiB/s [2024-11-20T08:20:25.637Z] 
11539.40 IOPS, 90.15 MiB/s [2024-11-20T08:20:26.669Z] [2024-11-20 09:20:26.521590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:49.640 [2024-11-20 09:20:26.521617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:49.640 
00:29:49.640 Latency(us) 
00:29:49.640 [2024-11-20T08:20:26.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:29:49.640 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 
00:29:49.640 Nvme1n1 : 5.01 11542.50 90.18 0.00 0.00 11075.12 2791.35 18932.62 
00:29:49.640 [2024-11-20T08:20:26.669Z] =================================================================================================================== 
00:29:49.640 [2024-11-20T08:20:26.669Z] Total : 11542.50 90.18 0.00 0.00 11075.12 2791.35 18932.62 00:29:49.640 
[... same pair of *ERROR* messages repeated with new timestamps from 09:20:26.527 through 09:20:26.751 during teardown ...] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 
42: kill: (3479576) - No such process 00:29:49.899 09:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3479576 00:29:49.899 09:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:49.899 09:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.899 09:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:49.899 09:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.899 09:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:49.899 09:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.899 09:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:49.899 delay0 00:29:49.899 09:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.899 09:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:29:49.899 09:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.899 09:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:49.899 09:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.899 09:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w 
randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:29:49.899 [2024-11-20 09:20:26.843160] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:29:58.001 Initializing NVMe Controllers 00:29:58.001 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:58.001 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:58.001 Initialization complete. Launching workers. 00:29:58.001 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 265, failed: 15945 00:29:58.001 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 16108, failed to submit 102 00:29:58.001 success 16018, unsuccessful 90, failed 0 00:29:58.001 09:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:29:58.001 09:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:29:58.001 09:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:58.001 09:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:29:58.001 09:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:58.001 09:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:29:58.001 09:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:58.001 09:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:58.001 rmmod nvme_tcp 00:29:58.001 rmmod nvme_fabrics 00:29:58.001 rmmod nvme_keyring 00:29:58.001 09:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:29:58.001 09:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:29:58.001 09:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:29:58.001 09:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3478349 ']' 00:29:58.001 09:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3478349 00:29:58.001 09:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 3478349 ']' 00:29:58.001 09:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 3478349 00:29:58.001 09:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:29:58.001 09:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:58.001 09:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3478349 00:29:58.001 09:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:58.002 09:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:58.002 09:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3478349' 00:29:58.002 killing process with pid 3478349 00:29:58.002 09:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 3478349 00:29:58.002 09:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 3478349 00:29:58.002 09:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:58.002 09:20:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:58.002 09:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:58.002 09:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:29:58.002 09:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:29:58.002 09:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:58.002 09:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:29:58.002 09:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:58.002 09:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:58.002 09:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.002 09:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:58.002 09:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.377 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:59.377 00:29:59.377 real 0m28.859s 00:29:59.377 user 0m40.164s 00:29:59.377 sys 0m10.274s 00:29:59.377 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:59.377 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:59.377 ************************************ 00:29:59.377 END TEST nvmf_zcopy 00:29:59.377 ************************************ 00:29:59.377 09:20:36 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:29:59.377 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:59.377 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:59.377 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:59.377 ************************************ 00:29:59.377 START TEST nvmf_nmic 00:29:59.377 ************************************ 00:29:59.377 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:29:59.636 * Looking for test storage... 00:29:59.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@336 -- # IFS=.-: 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:59.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.636 --rc genhtml_branch_coverage=1 00:29:59.636 --rc 
genhtml_function_coverage=1 00:29:59.636 --rc genhtml_legend=1 00:29:59.636 --rc geninfo_all_blocks=1 00:29:59.636 --rc geninfo_unexecuted_blocks=1 00:29:59.636 00:29:59.636 ' 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:59.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.636 --rc genhtml_branch_coverage=1 00:29:59.636 --rc genhtml_function_coverage=1 00:29:59.636 --rc genhtml_legend=1 00:29:59.636 --rc geninfo_all_blocks=1 00:29:59.636 --rc geninfo_unexecuted_blocks=1 00:29:59.636 00:29:59.636 ' 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:59.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.636 --rc genhtml_branch_coverage=1 00:29:59.636 --rc genhtml_function_coverage=1 00:29:59.636 --rc genhtml_legend=1 00:29:59.636 --rc geninfo_all_blocks=1 00:29:59.636 --rc geninfo_unexecuted_blocks=1 00:29:59.636 00:29:59.636 ' 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:59.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.636 --rc genhtml_branch_coverage=1 00:29:59.636 --rc genhtml_function_coverage=1 00:29:59.636 --rc genhtml_legend=1 00:29:59.636 --rc geninfo_all_blocks=1 00:29:59.636 --rc geninfo_unexecuted_blocks=1 00:29:59.636 00:29:59.636 ' 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:59.636 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.637 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.637 09:20:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.637 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:29:59.637 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.637 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:29:59.637 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:59.637 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:59.637 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:59.637 09:20:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:59.637 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:59.637 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:59.637 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:59.637 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:59.637 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:59.637 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:59.637 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:59.637 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:59.637 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:29:59.637 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:59.637 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:59.637 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:59.637 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:59.637 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:59.637 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.637 09:20:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:59.637 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.637 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:59.637 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:59.637 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:29:59.637 09:20:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:01.538 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:01.538 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:30:01.538 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:01.538 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:01.538 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:01.538 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:01.538 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:01.538 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:30:01.538 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:01.538 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:30:01.538 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@320 -- # local -ga e810 00:30:01.538 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:01.797 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:01.797 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:01.797 Found net devices under 0000:09:00.0: cvl_0_0 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:01.797 Found net devices under 0000:09:00.1: cvl_0_1 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:01.797 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:01.798 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:01.798 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:01.798 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:01.798 09:20:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:01.798 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:01.798 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:01.798 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:01.798 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:01.798 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:01.798 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:01.798 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:01.798 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:01.798 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:01.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:01.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:30:01.798 00:30:01.798 --- 10.0.0.2 ping statistics --- 00:30:01.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:01.798 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:30:01.798 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:01.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:01.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:30:01.798 00:30:01.798 --- 10.0.0.1 ping statistics --- 00:30:01.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:01.798 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:30:01.798 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:01.798 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:30:01.798 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:01.798 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:01.798 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:01.798 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:01.798 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:01.798 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:01.798 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:01.798 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:30:01.798 09:20:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:01.798 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:01.798 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:01.798 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3483058 00:30:01.798 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:30:01.798 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3483058 00:30:01.798 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 3483058 ']' 00:30:01.798 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:01.798 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:01.798 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:01.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:01.798 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:01.798 09:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:01.798 [2024-11-20 09:20:38.786206] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
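The NVMF_TRANSPORT_OPTS assembly traced above (nvmf/common.sh@482-493) starts from `-t <transport>` and appends `-o` when the transport is tcp. A minimal runnable distillation of that pattern (variable names follow the trace; this is a sketch, not the real common.sh):

```shell
# sketch of the transport-option assembly seen in the nvmf/common.sh trace:
# begin with "-t <transport>", then append -o for the tcp case
TEST_TRANSPORT=tcp
NVMF_TRANSPORT_OPTS="-t $TEST_TRANSPORT"
if [[ $TEST_TRANSPORT == tcp ]]; then
  # the trace shows tcp additionally getting the -o flag
  NVMF_TRANSPORT_OPTS="$NVMF_TRANSPORT_OPTS -o"
fi
echo "$NVMF_TRANSPORT_OPTS"
```

The resulting string is what later feeds `rpc_cmd nvmf_create_transport -t tcp -o -u 8192` in the nmic test.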
00:30:01.798 [2024-11-20 09:20:38.787269] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:30:01.798 [2024-11-20 09:20:38.787371] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:02.084 [2024-11-20 09:20:38.859717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:02.084 [2024-11-20 09:20:38.915307] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:02.084 [2024-11-20 09:20:38.915374] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:02.084 [2024-11-20 09:20:38.915388] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:02.084 [2024-11-20 09:20:38.915413] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:02.084 [2024-11-20 09:20:38.915424] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:02.084 [2024-11-20 09:20:38.917002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:02.084 [2024-11-20 09:20:38.917068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:02.084 [2024-11-20 09:20:38.917138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:02.084 [2024-11-20 09:20:38.917141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:02.084 [2024-11-20 09:20:39.001947] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:02.084 [2024-11-20 09:20:39.002177] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
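The target above was started with `-m 0xF`, and the reactor log confirms cores 0-3 came up. A small bash sketch decoding a hex cpumask into the core list it selects (pure arithmetic, no SPDK required):

```shell
# decode a hex cpumask (like the -m 0xF passed to nvmf_tgt above)
# into the list of core indices whose bits are set
mask=0xF
cores=()
for (( c = 0; c < 64; c++ )); do
  (( (mask >> c) & 1 )) && cores+=("$c")
done
echo "cores: ${cores[*]}"
```

For `0xF` this yields cores 0 1 2 3, matching the four "Reactor started on core N" notices in the trace.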
00:30:02.084 [2024-11-20 09:20:39.002490] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:02.084 [2024-11-20 09:20:39.003141] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:02.084 [2024-11-20 09:20:39.003395] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:02.084 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:02.084 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:30:02.084 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:02.084 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:02.084 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:02.084 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:02.084 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:02.084 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.084 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:02.085 [2024-11-20 09:20:39.053867] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:02.085 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.085 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:02.085 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.085 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:02.376 Malloc0 00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:02.376 [2024-11-20 
09:20:39.126019] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:30:02.376 test case1: single bdev can't be used in multiple subsystems 00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.376 09:20:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:02.376 [2024-11-20 09:20:39.149790] bdev.c:8273:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:30:02.376 [2024-11-20 09:20:39.149818] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:30:02.376 [2024-11-20 09:20:39.149848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:02.376 request: 00:30:02.376 { 00:30:02.376 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:30:02.376 "namespace": { 00:30:02.376 "bdev_name": "Malloc0", 00:30:02.376 "no_auto_visible": false 00:30:02.376 }, 00:30:02.376 "method": "nvmf_subsystem_add_ns", 00:30:02.376 "req_id": 1 00:30:02.376 } 00:30:02.376 Got JSON-RPC error response 00:30:02.376 response: 00:30:02.376 { 00:30:02.376 "code": -32602, 00:30:02.376 "message": "Invalid parameters" 00:30:02.376 } 00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:30:02.376 Adding namespace failed - expected result. 
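Test case1 above relies on an expected-failure idiom: nmic.sh@28-@36 records whether the second `nvmf_subsystem_add_ns` failed and treats success as the error. A runnable distillation, with `false` standing in for the rpc_cmd that is expected to fail:

```shell
# pattern from target/nmic.sh: the command *should* fail because Malloc0
# is already claimed by cnode1; success would be the bug.
# `false` is a stand-in for: rpc_cmd nvmf_subsystem_add_ns nqn...cnode2 Malloc0
nmic_status=0
false || nmic_status=1
if [ "$nmic_status" -eq 0 ]; then
  echo "Adding namespace passed - failure expected."
else
  echo " Adding namespace failed - expected result."
fi
```

The `|| nmic_status=1` also keeps the expected failure from tripping errexit, which is why the trace proceeds straight to test case2.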
00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:30:02.376 test case2: host connect to nvmf target in multiple paths 00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:02.376 [2024-11-20 09:20:39.157873] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:02.376 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:30:02.632 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:30:02.632 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:30:02.632 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:30:02.632 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:30:02.632 09:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:30:04.528 09:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:30:04.528 09:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:30:04.528 09:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:30:04.528 09:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:30:04.528 09:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:30:04.528 09:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:30:04.528 09:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:30:04.528 [global] 00:30:04.528 thread=1 00:30:04.528 invalidate=1 00:30:04.528 rw=write 00:30:04.528 time_based=1 00:30:04.528 runtime=1 00:30:04.528 ioengine=libaio 00:30:04.528 direct=1 00:30:04.528 bs=4096 00:30:04.528 iodepth=1 00:30:04.528 norandommap=0 00:30:04.528 numjobs=1 00:30:04.528 00:30:04.528 verify_dump=1 00:30:04.528 verify_backlog=512 00:30:04.528 verify_state_save=0 00:30:04.528 do_verify=1 00:30:04.528 verify=crc32c-intel 00:30:04.528 [job0] 00:30:04.528 filename=/dev/nvme0n1 00:30:04.528 Could not set queue depth (nvme0n1) 00:30:04.785 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:04.785 fio-3.35 00:30:04.785 Starting 1 thread 00:30:06.155 00:30:06.155 job0: (groupid=0, jobs=1): err= 0: pid=3483458: Wed Nov 20 
09:20:42 2024 00:30:06.155 read: IOPS=22, BW=90.0KiB/s (92.2kB/s)(92.0KiB/1022msec) 00:30:06.155 slat (nsec): min=6032, max=32460, avg=24050.61, stdev=9827.27 00:30:06.155 clat (usec): min=40824, max=41039, avg=40957.60, stdev=47.42 00:30:06.155 lat (usec): min=40830, max=41052, avg=40981.65, stdev=48.00 00:30:06.155 clat percentiles (usec): 00:30:06.155 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:30:06.155 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:06.155 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:06.155 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:06.155 | 99.99th=[41157] 00:30:06.155 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:30:06.155 slat (nsec): min=5297, max=34115, avg=6478.84, stdev=1864.88 00:30:06.155 clat (usec): min=133, max=297, avg=146.06, stdev=12.39 00:30:06.155 lat (usec): min=140, max=332, avg=152.54, stdev=13.16 00:30:06.155 clat percentiles (usec): 00:30:06.155 | 1.00th=[ 137], 5.00th=[ 139], 10.00th=[ 139], 20.00th=[ 141], 00:30:06.155 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 143], 60.00th=[ 145], 00:30:06.155 | 70.00th=[ 147], 80.00th=[ 151], 90.00th=[ 157], 95.00th=[ 163], 00:30:06.155 | 99.00th=[ 184], 99.50th=[ 235], 99.90th=[ 297], 99.95th=[ 297], 00:30:06.155 | 99.99th=[ 297] 00:30:06.155 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:30:06.155 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:06.155 lat (usec) : 250=95.33%, 500=0.37% 00:30:06.155 lat (msec) : 50=4.30% 00:30:06.155 cpu : usr=0.10%, sys=0.39%, ctx=535, majf=0, minf=1 00:30:06.155 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:06.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.155 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.155 issued rwts: 
total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.155 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:06.155 00:30:06.155 Run status group 0 (all jobs): 00:30:06.155 READ: bw=90.0KiB/s (92.2kB/s), 90.0KiB/s-90.0KiB/s (92.2kB/s-92.2kB/s), io=92.0KiB (94.2kB), run=1022-1022msec 00:30:06.155 WRITE: bw=2004KiB/s (2052kB/s), 2004KiB/s-2004KiB/s (2052kB/s-2052kB/s), io=2048KiB (2097kB), run=1022-1022msec 00:30:06.155 00:30:06.155 Disk stats (read/write): 00:30:06.155 nvme0n1: ios=70/512, merge=0/0, ticks=842/70, in_queue=912, util=91.38% 00:30:06.155 09:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:06.155 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:30:06.155 09:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:06.155 09:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:30:06.155 09:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:30:06.155 09:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:06.155 09:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:30:06.155 09:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:06.155 09:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:30:06.155 09:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:06.155 09:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:30:06.155 09:20:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:06.155 09:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:30:06.155 09:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:06.155 09:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:30:06.155 09:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:06.155 09:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:06.155 rmmod nvme_tcp 00:30:06.155 rmmod nvme_fabrics 00:30:06.155 rmmod nvme_keyring 00:30:06.155 09:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:06.155 09:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:30:06.155 09:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:30:06.155 09:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3483058 ']' 00:30:06.155 09:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3483058 00:30:06.155 09:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 3483058 ']' 00:30:06.155 09:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 3483058 00:30:06.155 09:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:30:06.155 09:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:06.155 09:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3483058 
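The bandwidth figures in the fio summary above (2004KiB/s write, 90.0KiB/s read over the 1022 msec run) follow directly from io size divided by runtime. A quick bash sanity check using rounded integer division, with the numbers taken from the log:

```shell
# sanity-check fio-reported bandwidth: bw(KiB/s) = io(KiB) * 1000 / runtime(ms),
# rounded to the nearest integer
runtime_ms=1022
write_io_kib=2048   # io=2048KiB on the WRITE line
read_io_kib=92      # io=92.0KiB on the READ line
write_bw=$(( (write_io_kib * 1000 + runtime_ms / 2) / runtime_ms ))
read_bw=$(( (read_io_kib * 1000 + runtime_ms / 2) / runtime_ms ))
echo "write=${write_bw}KiB/s read=${read_bw}KiB/s"
```

Both computed values match the summary lines, so the run-status figures are internally consistent.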
00:30:06.155 09:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:06.155 09:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:06.155 09:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3483058' 00:30:06.155 killing process with pid 3483058 00:30:06.155 09:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 3483058 00:30:06.155 09:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 3483058 00:30:06.413 09:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:06.413 09:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:06.413 09:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:06.413 09:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:30:06.413 09:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:30:06.413 09:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:06.413 09:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:30:06.413 09:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:06.413 09:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:06.413 09:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.413 09:20:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:06.413 09:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:08.942 00:30:08.942 real 0m9.026s 00:30:08.942 user 0m16.638s 00:30:08.942 sys 0m3.315s 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:08.942 ************************************ 00:30:08.942 END TEST nvmf_nmic 00:30:08.942 ************************************ 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:08.942 ************************************ 00:30:08.942 START TEST nvmf_fio_target 00:30:08.942 ************************************ 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:30:08.942 * Looking for test storage... 
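The nvmfcleanup sequence traced earlier disables errexit, retries module removal up to 20 times, then restores it (nvmf/common.sh@124-128). A runnable skeleton of that retry shape, with the modprobe call replaced by a stub so the loop runs anywhere:

```shell
# skeleton of the nvmfcleanup retry pattern: tolerate transient failures
# while unloading, then re-enable errexit
remove_module() { return 0; }   # stand-in for: modprobe -v -r nvme-tcp
set +e
for i in {1..20}; do
  remove_module && break
  sleep 1
done
set -e
echo "removed after $i attempt(s)"
```

In the real trace the first `modprobe -v -r nvme-tcp` succeeds immediately (the rmmod lines), so the loop exits on iteration 1 just as the stub does here.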
00:30:08.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
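The `lt 1.15 2` / cmp_versions trace running through here splits each dotted version on `.` and `-` and compares field by field, padding the shorter one with zeros. A compact runnable distillation of that comparison (a sketch of the logic, not the real scripts/common.sh):

```shell
# field-by-field dotted-version comparison, as in scripts/common.sh cmp_versions:
# returns 0 (true) when $1 is strictly less than $2
version_lt() {
  local -a ver1 ver2
  IFS=.- read -ra ver1 <<< "$1"
  IFS=.- read -ra ver2 <<< "$2"
  local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
    (( a > b )) && return 1
    (( a < b )) && return 0
  done
  return 1   # equal versions are not less-than
}
version_lt 1.15 2 && echo "1.15 < 2"
```

This is the check the trace uses to pick lcov option sets: 1.15 compares below 2 at the first field, so the `lt 1.15 2` branch is taken.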
00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:08.942 
09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:08.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.942 --rc genhtml_branch_coverage=1 00:30:08.942 --rc genhtml_function_coverage=1 00:30:08.942 --rc genhtml_legend=1 00:30:08.942 --rc geninfo_all_blocks=1 00:30:08.942 --rc geninfo_unexecuted_blocks=1 00:30:08.942 00:30:08.942 ' 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:08.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.942 --rc genhtml_branch_coverage=1 00:30:08.942 --rc genhtml_function_coverage=1 00:30:08.942 --rc genhtml_legend=1 00:30:08.942 --rc geninfo_all_blocks=1 00:30:08.942 --rc geninfo_unexecuted_blocks=1 00:30:08.942 00:30:08.942 ' 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:08.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.942 --rc genhtml_branch_coverage=1 00:30:08.942 --rc genhtml_function_coverage=1 00:30:08.942 --rc genhtml_legend=1 00:30:08.942 --rc geninfo_all_blocks=1 00:30:08.942 --rc geninfo_unexecuted_blocks=1 00:30:08.942 00:30:08.942 ' 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:08.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.942 --rc genhtml_branch_coverage=1 00:30:08.942 --rc genhtml_function_coverage=1 00:30:08.942 --rc genhtml_legend=1 00:30:08.942 --rc geninfo_all_blocks=1 
00:30:08.942 --rc geninfo_unexecuted_blocks=1 00:30:08.942 00:30:08.942 ' 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:08.942 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:08.943 
09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.943 09:20:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:08.943 
09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:08.943 09:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:30:08.943 09:20:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:10.846 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:10.846 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:30:10.846 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:10.846 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:30:10.847 09:20:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:10.847 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:10.847 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:10.847 
09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:10.847 Found net 
devices under 0000:09:00.0: cvl_0_0 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:10.847 Found net devices under 0000:09:00.1: cvl_0_1 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:10.847 09:20:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:10.847 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:11.105 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:11.105 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:11.105 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:11.105 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:11.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:11.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:30:11.105 00:30:11.105 --- 10.0.0.2 ping statistics --- 00:30:11.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:11.105 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:30:11.106 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:11.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:11.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:30:11.106 00:30:11.106 --- 10.0.0.1 ping statistics --- 00:30:11.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:11.106 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:30:11.106 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:11.106 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:30:11.106 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:11.106 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:11.106 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:11.106 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:11.106 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:11.106 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:11.106 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:11.106 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:30:11.106 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:11.106 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:11.106 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:11.106 09:20:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3485640 00:30:11.106 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:30:11.106 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3485640 00:30:11.106 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 3485640 ']' 00:30:11.106 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:11.106 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:11.106 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:11.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:11.106 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:11.106 09:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:11.106 [2024-11-20 09:20:47.970385] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:11.106 [2024-11-20 09:20:47.971476] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:30:11.106 [2024-11-20 09:20:47.971540] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:11.106 [2024-11-20 09:20:48.043506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:11.106 [2024-11-20 09:20:48.101485] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:11.106 [2024-11-20 09:20:48.101535] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:11.106 [2024-11-20 09:20:48.101564] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:11.106 [2024-11-20 09:20:48.101576] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:11.106 [2024-11-20 09:20:48.101586] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:11.106 [2024-11-20 09:20:48.103184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:11.106 [2024-11-20 09:20:48.103216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:11.106 [2024-11-20 09:20:48.103269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:11.106 [2024-11-20 09:20:48.103272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:11.364 [2024-11-20 09:20:48.191572] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:11.364 [2024-11-20 09:20:48.191840] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:11.364 [2024-11-20 09:20:48.192103] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:11.364 [2024-11-20 09:20:48.192767] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:11.364 [2024-11-20 09:20:48.192996] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:11.364 09:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:11.364 09:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:30:11.364 09:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:11.364 09:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:11.364 09:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:11.364 09:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:11.364 09:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:11.623 [2024-11-20 09:20:48.548089] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:11.623 09:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:12.189 09:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:30:12.189 09:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:30:12.446 09:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:30:12.446 09:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:12.704 09:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:30:12.704 09:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:12.961 09:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:30:12.961 09:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:30:13.219 09:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:13.477 09:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:30:13.477 09:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:14.041 09:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:30:14.041 09:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:14.298 09:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:30:14.298 09:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:30:14.556 09:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:14.813 09:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:30:14.813 09:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:15.070 09:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:30:15.070 09:20:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:15.327 09:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:15.584 [2024-11-20 09:20:52.424206] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:15.584 09:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:30:15.841 09:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:30:16.097 09:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:16.354 09:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:30:16.354 09:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:30:16.354 09:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:30:16.354 09:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:30:16.354 09:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:30:16.354 09:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:30:18.250 09:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:30:18.250 09:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:30:18.250 09:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:30:18.250 09:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:30:18.250 09:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:30:18.250 09:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1210 -- # return 0 00:30:18.250 09:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:30:18.250 [global] 00:30:18.250 thread=1 00:30:18.250 invalidate=1 00:30:18.250 rw=write 00:30:18.250 time_based=1 00:30:18.250 runtime=1 00:30:18.250 ioengine=libaio 00:30:18.250 direct=1 00:30:18.250 bs=4096 00:30:18.250 iodepth=1 00:30:18.250 norandommap=0 00:30:18.250 numjobs=1 00:30:18.250 00:30:18.250 verify_dump=1 00:30:18.250 verify_backlog=512 00:30:18.250 verify_state_save=0 00:30:18.250 do_verify=1 00:30:18.250 verify=crc32c-intel 00:30:18.250 [job0] 00:30:18.250 filename=/dev/nvme0n1 00:30:18.250 [job1] 00:30:18.250 filename=/dev/nvme0n2 00:30:18.250 [job2] 00:30:18.250 filename=/dev/nvme0n3 00:30:18.250 [job3] 00:30:18.250 filename=/dev/nvme0n4 00:30:18.507 Could not set queue depth (nvme0n1) 00:30:18.507 Could not set queue depth (nvme0n2) 00:30:18.507 Could not set queue depth (nvme0n3) 00:30:18.507 Could not set queue depth (nvme0n4) 00:30:18.507 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:18.507 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:18.507 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:18.507 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:18.507 fio-3.35 00:30:18.507 Starting 4 threads 00:30:19.879 00:30:19.879 job0: (groupid=0, jobs=1): err= 0: pid=3486699: Wed Nov 20 09:20:56 2024 00:30:19.879 read: IOPS=2000, BW=8000KiB/s (8192kB/s)(8224KiB/1028msec) 00:30:19.879 slat (nsec): min=4864, max=29306, avg=7025.46, stdev=2483.35 00:30:19.879 clat (usec): min=178, max=41565, avg=252.25, stdev=912.40 00:30:19.879 lat (usec): min=186, 
max=41577, avg=259.28, stdev=912.53 00:30:19.879 clat percentiles (usec): 00:30:19.879 | 1.00th=[ 200], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 212], 00:30:19.879 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 231], 00:30:19.879 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 265], 95.00th=[ 289], 00:30:19.879 | 99.00th=[ 343], 99.50th=[ 388], 99.90th=[ 889], 99.95th=[ 1106], 00:30:19.879 | 99.99th=[41681] 00:30:19.879 write: IOPS=2490, BW=9961KiB/s (10.2MB/s)(10.0MiB/1028msec); 0 zone resets 00:30:19.879 slat (usec): min=7, max=17883, avg=16.63, stdev=353.28 00:30:19.879 clat (usec): min=136, max=401, avg=171.72, stdev=27.11 00:30:19.879 lat (usec): min=146, max=18148, avg=188.34, stdev=356.19 00:30:19.879 clat percentiles (usec): 00:30:19.879 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 153], 00:30:19.879 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 167], 00:30:19.879 | 70.00th=[ 174], 80.00th=[ 194], 90.00th=[ 210], 95.00th=[ 221], 00:30:19.879 | 99.00th=[ 255], 99.50th=[ 310], 99.90th=[ 400], 99.95th=[ 400], 00:30:19.879 | 99.99th=[ 404] 00:30:19.879 bw ( KiB/s): min= 9392, max=11088, per=44.56%, avg=10240.00, stdev=1199.25, samples=2 00:30:19.879 iops : min= 2348, max= 2772, avg=2560.00, stdev=299.81, samples=2 00:30:19.879 lat (usec) : 250=93.22%, 500=6.67%, 750=0.04%, 1000=0.02% 00:30:19.879 lat (msec) : 2=0.02%, 50=0.02% 00:30:19.879 cpu : usr=2.73%, sys=5.06%, ctx=4619, majf=0, minf=1 00:30:19.879 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:19.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.879 issued rwts: total=2056,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:19.879 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:19.879 job1: (groupid=0, jobs=1): err= 0: pid=3486703: Wed Nov 20 09:20:56 2024 00:30:19.879 read: IOPS=1534, BW=6138KiB/s 
(6285kB/s)(6144KiB/1001msec) 00:30:19.879 slat (nsec): min=4896, max=30538, avg=8199.62, stdev=2873.53 00:30:19.879 clat (usec): min=210, max=40996, avg=419.72, stdev=2526.70 00:30:19.879 lat (usec): min=217, max=41010, avg=427.92, stdev=2527.11 00:30:19.879 clat percentiles (usec): 00:30:19.879 | 1.00th=[ 215], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 231], 00:30:19.879 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 249], 00:30:19.879 | 70.00th=[ 260], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[ 392], 00:30:19.879 | 99.00th=[ 537], 99.50th=[ 586], 99.90th=[41157], 99.95th=[41157], 00:30:19.879 | 99.99th=[41157] 00:30:19.879 write: IOPS=1617, BW=6470KiB/s (6625kB/s)(6476KiB/1001msec); 0 zone resets 00:30:19.879 slat (usec): min=6, max=17984, avg=20.12, stdev=446.76 00:30:19.879 clat (usec): min=151, max=315, avg=185.86, stdev=24.15 00:30:19.879 lat (usec): min=159, max=18145, avg=205.98, stdev=446.78 00:30:19.879 clat percentiles (usec): 00:30:19.879 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 163], 00:30:19.879 | 30.00th=[ 167], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 188], 00:30:19.879 | 70.00th=[ 200], 80.00th=[ 208], 90.00th=[ 221], 95.00th=[ 231], 00:30:19.879 | 99.00th=[ 249], 99.50th=[ 253], 99.90th=[ 285], 99.95th=[ 314], 00:30:19.879 | 99.99th=[ 314] 00:30:19.879 bw ( KiB/s): min= 4096, max= 4096, per=17.82%, avg=4096.00, stdev= 0.00, samples=1 00:30:19.879 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:19.879 lat (usec) : 250=81.11%, 500=17.72%, 750=0.95%, 1000=0.03% 00:30:19.879 lat (msec) : 50=0.19% 00:30:19.879 cpu : usr=1.50%, sys=3.80%, ctx=3158, majf=0, minf=1 00:30:19.879 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:19.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.879 issued rwts: total=1536,1619,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:30:19.879 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:19.879 job2: (groupid=0, jobs=1): err= 0: pid=3486704: Wed Nov 20 09:20:56 2024 00:30:19.879 read: IOPS=19, BW=79.2KiB/s (81.1kB/s)(80.0KiB/1010msec) 00:30:19.879 slat (nsec): min=9921, max=28898, avg=15654.55, stdev=3980.07 00:30:19.879 clat (usec): min=40900, max=41714, avg=41043.00, stdev=189.07 00:30:19.879 lat (usec): min=40915, max=41732, avg=41058.66, stdev=189.05 00:30:19.879 clat percentiles (usec): 00:30:19.879 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:30:19.879 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:19.879 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:19.879 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:30:19.879 | 99.99th=[41681] 00:30:19.879 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:30:19.879 slat (usec): min=8, max=18731, avg=50.21, stdev=827.24 00:30:19.879 clat (usec): min=191, max=504, avg=311.91, stdev=68.97 00:30:19.879 lat (usec): min=203, max=19152, avg=362.11, stdev=834.85 00:30:19.879 clat percentiles (usec): 00:30:19.879 | 1.00th=[ 202], 5.00th=[ 215], 10.00th=[ 231], 20.00th=[ 249], 00:30:19.879 | 30.00th=[ 269], 40.00th=[ 285], 50.00th=[ 297], 60.00th=[ 314], 00:30:19.879 | 70.00th=[ 334], 80.00th=[ 396], 90.00th=[ 408], 95.00th=[ 433], 00:30:19.879 | 99.00th=[ 469], 99.50th=[ 478], 99.90th=[ 506], 99.95th=[ 506], 00:30:19.879 | 99.99th=[ 506] 00:30:19.879 bw ( KiB/s): min= 4096, max= 4096, per=17.82%, avg=4096.00, stdev= 0.00, samples=1 00:30:19.879 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:19.879 lat (usec) : 250=20.49%, 500=75.56%, 750=0.19% 00:30:19.879 lat (msec) : 50=3.76% 00:30:19.879 cpu : usr=0.40%, sys=0.89%, ctx=534, majf=0, minf=1 00:30:19.879 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:19.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.879 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:19.879 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:19.879 job3: (groupid=0, jobs=1): err= 0: pid=3486705: Wed Nov 20 09:20:56 2024 00:30:19.879 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:30:19.879 slat (nsec): min=5702, max=38122, avg=7705.86, stdev=3001.66 00:30:19.879 clat (usec): min=206, max=42298, avg=688.32, stdev=4090.11 00:30:19.879 lat (usec): min=212, max=42311, avg=696.03, stdev=4090.55 00:30:19.879 clat percentiles (usec): 00:30:19.879 | 1.00th=[ 221], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 243], 00:30:19.879 | 30.00th=[ 247], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 253], 00:30:19.879 | 70.00th=[ 258], 80.00th=[ 273], 90.00th=[ 420], 95.00th=[ 523], 00:30:19.879 | 99.00th=[ 1037], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:30:19.879 | 99.99th=[42206] 00:30:19.879 write: IOPS=1213, BW=4855KiB/s (4972kB/s)(4860KiB/1001msec); 0 zone resets 00:30:19.879 slat (nsec): min=7167, max=46240, avg=11146.36, stdev=5241.62 00:30:19.879 clat (usec): min=141, max=1046, avg=220.27, stdev=82.91 00:30:19.879 lat (usec): min=149, max=1057, avg=231.41, stdev=83.90 00:30:19.879 clat percentiles (usec): 00:30:19.879 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 163], 00:30:19.879 | 30.00th=[ 167], 40.00th=[ 174], 50.00th=[ 186], 60.00th=[ 198], 00:30:19.879 | 70.00th=[ 233], 80.00th=[ 277], 90.00th=[ 347], 95.00th=[ 400], 00:30:19.879 | 99.00th=[ 441], 99.50th=[ 465], 99.90th=[ 1029], 99.95th=[ 1045], 00:30:19.879 | 99.99th=[ 1045] 00:30:19.879 bw ( KiB/s): min= 8192, max= 8192, per=35.65%, avg=8192.00, stdev= 0.00, samples=1 00:30:19.879 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:30:19.879 lat (usec) : 250=63.20%, 500=33.94%, 750=2.10%, 1000=0.18% 00:30:19.879 lat (msec) : 2=0.13%, 
50=0.45% 00:30:19.879 cpu : usr=1.60%, sys=2.80%, ctx=2239, majf=0, minf=1 00:30:19.879 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:19.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.879 issued rwts: total=1024,1215,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:19.879 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:19.879 00:30:19.879 Run status group 0 (all jobs): 00:30:19.879 READ: bw=17.6MiB/s (18.5MB/s), 79.2KiB/s-8000KiB/s (81.1kB/s-8192kB/s), io=18.1MiB (19.0MB), run=1001-1028msec 00:30:19.879 WRITE: bw=22.4MiB/s (23.5MB/s), 2028KiB/s-9961KiB/s (2076kB/s-10.2MB/s), io=23.1MiB (24.2MB), run=1001-1028msec 00:30:19.879 00:30:19.879 Disk stats (read/write): 00:30:19.879 nvme0n1: ios=1909/2048, merge=0/0, ticks=1360/320, in_queue=1680, util=93.59% 00:30:19.879 nvme0n2: ios=1079/1536, merge=0/0, ticks=988/271, in_queue=1259, util=97.66% 00:30:19.879 nvme0n3: ios=76/512, merge=0/0, ticks=1238/156, in_queue=1394, util=97.81% 00:30:19.879 nvme0n4: ios=1005/1024, merge=0/0, ticks=581/211, in_queue=792, util=90.63% 00:30:19.879 09:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:30:19.879 [global] 00:30:19.879 thread=1 00:30:19.879 invalidate=1 00:30:19.879 rw=randwrite 00:30:19.879 time_based=1 00:30:19.879 runtime=1 00:30:19.879 ioengine=libaio 00:30:19.879 direct=1 00:30:19.879 bs=4096 00:30:19.879 iodepth=1 00:30:19.879 norandommap=0 00:30:19.879 numjobs=1 00:30:19.879 00:30:19.879 verify_dump=1 00:30:19.879 verify_backlog=512 00:30:19.879 verify_state_save=0 00:30:19.879 do_verify=1 00:30:19.879 verify=crc32c-intel 00:30:19.879 [job0] 00:30:19.879 filename=/dev/nvme0n1 00:30:19.879 [job1] 00:30:19.879 filename=/dev/nvme0n2 00:30:19.879 [job2] 
00:30:19.879 filename=/dev/nvme0n3 00:30:19.879 [job3] 00:30:19.879 filename=/dev/nvme0n4 00:30:19.879 Could not set queue depth (nvme0n1) 00:30:19.879 Could not set queue depth (nvme0n2) 00:30:19.879 Could not set queue depth (nvme0n3) 00:30:19.879 Could not set queue depth (nvme0n4) 00:30:20.137 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:20.137 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:20.137 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:20.137 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:20.137 fio-3.35 00:30:20.137 Starting 4 threads 00:30:21.172 00:30:21.172 job0: (groupid=0, jobs=1): err= 0: pid=3486927: Wed Nov 20 09:20:58 2024 00:30:21.172 read: IOPS=1304, BW=5219KiB/s (5344kB/s)(5224KiB/1001msec) 00:30:21.172 slat (nsec): min=4963, max=56124, avg=14576.65, stdev=10013.90 00:30:21.172 clat (usec): min=206, max=42171, avg=509.87, stdev=2999.57 00:30:21.172 lat (usec): min=213, max=42176, avg=524.44, stdev=2999.48 00:30:21.172 clat percentiles (usec): 00:30:21.172 | 1.00th=[ 210], 5.00th=[ 217], 10.00th=[ 221], 20.00th=[ 229], 00:30:21.172 | 30.00th=[ 241], 40.00th=[ 251], 50.00th=[ 273], 60.00th=[ 281], 00:30:21.172 | 70.00th=[ 289], 80.00th=[ 322], 90.00th=[ 453], 95.00th=[ 478], 00:30:21.172 | 99.00th=[ 545], 99.50th=[40633], 99.90th=[42206], 99.95th=[42206], 00:30:21.172 | 99.99th=[42206] 00:30:21.172 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:30:21.172 slat (nsec): min=6845, max=45708, avg=13507.89, stdev=4989.05 00:30:21.172 clat (usec): min=143, max=377, avg=184.47, stdev=34.07 00:30:21.172 lat (usec): min=153, max=392, avg=197.98, stdev=36.83 00:30:21.172 clat percentiles (usec): 00:30:21.172 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 151], 
20.00th=[ 155], 00:30:21.172 | 30.00th=[ 159], 40.00th=[ 167], 50.00th=[ 176], 60.00th=[ 184], 00:30:21.172 | 70.00th=[ 204], 80.00th=[ 215], 90.00th=[ 227], 95.00th=[ 241], 00:30:21.172 | 99.00th=[ 318], 99.50th=[ 338], 99.90th=[ 375], 99.95th=[ 379], 00:30:21.172 | 99.99th=[ 379] 00:30:21.172 bw ( KiB/s): min= 8192, max= 8192, per=41.52%, avg=8192.00, stdev= 0.00, samples=1 00:30:21.172 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:30:21.172 lat (usec) : 250=70.48%, 500=28.29%, 750=0.99% 00:30:21.172 lat (msec) : 50=0.25% 00:30:21.172 cpu : usr=2.10%, sys=4.60%, ctx=2843, majf=0, minf=1 00:30:21.172 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:21.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.172 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.172 issued rwts: total=1306,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:21.172 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:21.172 job1: (groupid=0, jobs=1): err= 0: pid=3486930: Wed Nov 20 09:20:58 2024 00:30:21.172 read: IOPS=586, BW=2347KiB/s (2403kB/s)(2436KiB/1038msec) 00:30:21.172 slat (nsec): min=4985, max=51903, avg=12699.05, stdev=6497.51 00:30:21.172 clat (usec): min=196, max=42196, avg=1349.81, stdev=6644.42 00:30:21.172 lat (usec): min=207, max=42207, avg=1362.51, stdev=6645.19 00:30:21.172 clat percentiles (usec): 00:30:21.172 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 215], 20.00th=[ 223], 00:30:21.172 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 265], 00:30:21.172 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 334], 95.00th=[ 429], 00:30:21.172 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:21.172 | 99.99th=[42206] 00:30:21.172 write: IOPS=986, BW=3946KiB/s (4041kB/s)(4096KiB/1038msec); 0 zone resets 00:30:21.172 slat (nsec): min=6509, max=57494, avg=11992.78, stdev=6006.73 00:30:21.172 clat (usec): min=132, max=407, 
avg=185.28, stdev=40.65 00:30:21.172 lat (usec): min=139, max=425, avg=197.28, stdev=44.24 00:30:21.172 clat percentiles (usec): 00:30:21.172 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 149], 00:30:21.172 | 30.00th=[ 153], 40.00th=[ 159], 50.00th=[ 172], 60.00th=[ 202], 00:30:21.172 | 70.00th=[ 210], 80.00th=[ 219], 90.00th=[ 235], 95.00th=[ 253], 00:30:21.172 | 99.00th=[ 314], 99.50th=[ 330], 99.90th=[ 379], 99.95th=[ 408], 00:30:21.172 | 99.99th=[ 408] 00:30:21.172 bw ( KiB/s): min= 4096, max= 4096, per=20.76%, avg=4096.00, stdev= 0.00, samples=2 00:30:21.172 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:30:21.172 lat (usec) : 250=79.79%, 500=19.17%, 750=0.06% 00:30:21.172 lat (msec) : 50=0.98% 00:30:21.172 cpu : usr=0.68%, sys=2.31%, ctx=1634, majf=0, minf=1 00:30:21.172 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:21.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.172 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.172 issued rwts: total=609,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:21.172 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:21.172 job2: (groupid=0, jobs=1): err= 0: pid=3486931: Wed Nov 20 09:20:58 2024 00:30:21.172 read: IOPS=1666, BW=6665KiB/s (6825kB/s)(6672KiB/1001msec) 00:30:21.172 slat (nsec): min=5250, max=58570, avg=13546.99, stdev=7755.96 00:30:21.172 clat (usec): min=203, max=41004, avg=334.35, stdev=1720.55 00:30:21.172 lat (usec): min=209, max=41021, avg=347.90, stdev=1720.68 00:30:21.172 clat percentiles (usec): 00:30:21.172 | 1.00th=[ 217], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 233], 00:30:21.172 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 265], 00:30:21.172 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 297], 95.00th=[ 351], 00:30:21.172 | 99.00th=[ 396], 99.50th=[ 494], 99.90th=[41157], 99.95th=[41157], 00:30:21.172 | 99.99th=[41157] 00:30:21.172 write: 
IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:30:21.172 slat (nsec): min=7041, max=62948, avg=12815.19, stdev=5856.09 00:30:21.172 clat (usec): min=148, max=1264, avg=185.74, stdev=37.97 00:30:21.172 lat (usec): min=156, max=1282, avg=198.55, stdev=40.66 00:30:21.172 clat percentiles (usec): 00:30:21.172 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:30:21.172 | 30.00th=[ 167], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:30:21.172 | 70.00th=[ 190], 80.00th=[ 206], 90.00th=[ 221], 95.00th=[ 237], 00:30:21.172 | 99.00th=[ 297], 99.50th=[ 338], 99.90th=[ 437], 99.95th=[ 586], 00:30:21.172 | 99.99th=[ 1270] 00:30:21.172 bw ( KiB/s): min= 8192, max= 8192, per=41.52%, avg=8192.00, stdev= 0.00, samples=1 00:30:21.172 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:30:21.172 lat (usec) : 250=77.66%, 500=22.07%, 750=0.08%, 1000=0.05% 00:30:21.172 lat (msec) : 2=0.05%, 50=0.08% 00:30:21.172 cpu : usr=2.20%, sys=5.30%, ctx=3718, majf=0, minf=1 00:30:21.172 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:21.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.172 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.172 issued rwts: total=1668,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:21.172 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:21.172 job3: (groupid=0, jobs=1): err= 0: pid=3486932: Wed Nov 20 09:20:58 2024 00:30:21.172 read: IOPS=390, BW=1563KiB/s (1600kB/s)(1572KiB/1006msec) 00:30:21.172 slat (nsec): min=5101, max=58098, avg=19859.86, stdev=10290.83 00:30:21.172 clat (usec): min=200, max=42116, avg=2217.88, stdev=8553.80 00:30:21.172 lat (usec): min=206, max=42130, avg=2237.74, stdev=8552.91 00:30:21.173 clat percentiles (usec): 00:30:21.173 | 1.00th=[ 221], 5.00th=[ 241], 10.00th=[ 251], 20.00th=[ 265], 00:30:21.173 | 30.00th=[ 277], 40.00th=[ 293], 50.00th=[ 322], 60.00th=[ 347], 
00:30:21.173 | 70.00th=[ 412], 80.00th=[ 478], 90.00th=[ 498], 95.00th=[ 611], 00:30:21.173 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:21.173 | 99.99th=[42206] 00:30:21.173 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:30:21.173 slat (nsec): min=7146, max=48793, avg=16072.97, stdev=5086.63 00:30:21.173 clat (usec): min=178, max=373, avg=220.69, stdev=23.35 00:30:21.173 lat (usec): min=193, max=388, avg=236.76, stdev=24.37 00:30:21.173 clat percentiles (usec): 00:30:21.173 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 208], 00:30:21.173 | 30.00th=[ 210], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 219], 00:30:21.173 | 70.00th=[ 225], 80.00th=[ 229], 90.00th=[ 243], 95.00th=[ 260], 00:30:21.173 | 99.00th=[ 330], 99.50th=[ 351], 99.90th=[ 375], 99.95th=[ 375], 00:30:21.173 | 99.99th=[ 375] 00:30:21.173 bw ( KiB/s): min= 4096, max= 4096, per=20.76%, avg=4096.00, stdev= 0.00, samples=1 00:30:21.173 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:21.173 lat (usec) : 250=57.24%, 500=38.78%, 750=1.88% 00:30:21.173 lat (msec) : 2=0.11%, 50=1.99% 00:30:21.173 cpu : usr=0.80%, sys=1.69%, ctx=905, majf=0, minf=1 00:30:21.173 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:21.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.173 issued rwts: total=393,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:21.173 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:21.173 00:30:21.173 Run status group 0 (all jobs): 00:30:21.173 READ: bw=15.0MiB/s (15.7MB/s), 1563KiB/s-6665KiB/s (1600kB/s-6825kB/s), io=15.5MiB (16.3MB), run=1001-1038msec 00:30:21.173 WRITE: bw=19.3MiB/s (20.2MB/s), 2036KiB/s-8184KiB/s (2085kB/s-8380kB/s), io=20.0MiB (21.0MB), run=1001-1038msec 00:30:21.173 00:30:21.173 Disk stats (read/write): 00:30:21.173 nvme0n1: 
ios=1211/1536, merge=0/0, ticks=1423/278, in_queue=1701, util=89.48% 00:30:21.173 nvme0n2: ios=630/1024, merge=0/0, ticks=1552/191, in_queue=1743, util=93.50% 00:30:21.173 nvme0n3: ios=1483/1536, merge=0/0, ticks=1264/282, in_queue=1546, util=98.64% 00:30:21.173 nvme0n4: ios=446/512, merge=0/0, ticks=769/106, in_queue=875, util=94.73% 00:30:21.173 09:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:30:21.443 [global] 00:30:21.443 thread=1 00:30:21.443 invalidate=1 00:30:21.443 rw=write 00:30:21.443 time_based=1 00:30:21.443 runtime=1 00:30:21.443 ioengine=libaio 00:30:21.443 direct=1 00:30:21.443 bs=4096 00:30:21.443 iodepth=128 00:30:21.443 norandommap=0 00:30:21.443 numjobs=1 00:30:21.443 00:30:21.443 verify_dump=1 00:30:21.443 verify_backlog=512 00:30:21.443 verify_state_save=0 00:30:21.443 do_verify=1 00:30:21.443 verify=crc32c-intel 00:30:21.443 [job0] 00:30:21.443 filename=/dev/nvme0n1 00:30:21.443 [job1] 00:30:21.443 filename=/dev/nvme0n2 00:30:21.443 [job2] 00:30:21.443 filename=/dev/nvme0n3 00:30:21.443 [job3] 00:30:21.443 filename=/dev/nvme0n4 00:30:21.443 Could not set queue depth (nvme0n1) 00:30:21.443 Could not set queue depth (nvme0n2) 00:30:21.443 Could not set queue depth (nvme0n3) 00:30:21.444 Could not set queue depth (nvme0n4) 00:30:21.444 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:21.444 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:21.444 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:21.444 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:21.444 fio-3.35 00:30:21.444 Starting 4 threads 00:30:22.814 00:30:22.814 job0: (groupid=0, 
jobs=1): err= 0: pid=3487165: Wed Nov 20 09:20:59 2024 00:30:22.814 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:30:22.814 slat (usec): min=2, max=6279, avg=79.78, stdev=439.69 00:30:22.814 clat (usec): min=3553, max=20113, avg=10728.93, stdev=2141.41 00:30:22.814 lat (usec): min=3563, max=20127, avg=10808.71, stdev=2162.14 00:30:22.814 clat percentiles (usec): 00:30:22.814 | 1.00th=[ 7177], 5.00th=[ 8356], 10.00th=[ 8717], 20.00th=[ 9110], 00:30:22.814 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10683], 00:30:22.814 | 70.00th=[11207], 80.00th=[11994], 90.00th=[13960], 95.00th=[15008], 00:30:22.814 | 99.00th=[17957], 99.50th=[17957], 99.90th=[19268], 99.95th=[19530], 00:30:22.814 | 99.99th=[20055] 00:30:22.814 write: IOPS=5925, BW=23.1MiB/s (24.3MB/s)(23.2MiB/1002msec); 0 zone resets 00:30:22.814 slat (usec): min=3, max=22109, avg=82.37, stdev=554.29 00:30:22.814 clat (usec): min=1855, max=48414, avg=11200.84, stdev=4699.01 00:30:22.814 lat (usec): min=1874, max=48440, avg=11283.21, stdev=4730.61 00:30:22.814 clat percentiles (usec): 00:30:22.814 | 1.00th=[ 5342], 5.00th=[ 7701], 10.00th=[ 8586], 20.00th=[ 9110], 00:30:22.814 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10159], 00:30:22.814 | 70.00th=[11076], 80.00th=[13173], 90.00th=[14353], 95.00th=[15795], 00:30:22.814 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:30:22.814 | 99.99th=[48497] 00:30:22.814 bw ( KiB/s): min=20480, max=26000, per=40.94%, avg=23240.00, stdev=3903.23, samples=2 00:30:22.814 iops : min= 5120, max= 6500, avg=5810.00, stdev=975.81, samples=2 00:30:22.814 lat (msec) : 2=0.05%, 4=0.35%, 10=49.43%, 20=48.51%, 50=1.67% 00:30:22.814 cpu : usr=6.49%, sys=13.39%, ctx=516, majf=0, minf=2 00:30:22.814 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:30:22.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:22.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.1% 00:30:22.814 issued rwts: total=5632,5937,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:22.814 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:22.814 job1: (groupid=0, jobs=1): err= 0: pid=3487166: Wed Nov 20 09:20:59 2024 00:30:22.814 read: IOPS=2687, BW=10.5MiB/s (11.0MB/s)(11.0MiB/1044msec) 00:30:22.814 slat (usec): min=2, max=58484, avg=185.13, stdev=1656.44 00:30:22.814 clat (usec): min=2261, max=96900, avg=25699.49, stdev=23182.61 00:30:22.814 lat (usec): min=2273, max=96903, avg=25884.62, stdev=23313.47 00:30:22.814 clat percentiles (usec): 00:30:22.814 | 1.00th=[ 4555], 5.00th=[ 8356], 10.00th=[ 9503], 20.00th=[10290], 00:30:22.814 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11469], 60.00th=[20317], 00:30:22.814 | 70.00th=[28181], 80.00th=[44303], 90.00th=[62653], 95.00th=[71828], 00:30:22.814 | 99.00th=[96994], 99.50th=[96994], 99.90th=[96994], 99.95th=[96994], 00:30:22.815 | 99.99th=[96994] 00:30:22.815 write: IOPS=2942, BW=11.5MiB/s (12.1MB/s)(12.0MiB/1044msec); 0 zone resets 00:30:22.815 slat (usec): min=3, max=24493, avg=146.47, stdev=1308.97 00:30:22.815 clat (usec): min=1027, max=87136, avg=19562.79, stdev=19078.08 00:30:22.815 lat (usec): min=1031, max=87157, avg=19709.26, stdev=19255.78 00:30:22.815 clat percentiles (usec): 00:30:22.815 | 1.00th=[ 2474], 5.00th=[ 7111], 10.00th=[ 8848], 20.00th=[ 9503], 00:30:22.815 | 30.00th=[10028], 40.00th=[10683], 50.00th=[11338], 60.00th=[11863], 00:30:22.815 | 70.00th=[13173], 80.00th=[20055], 90.00th=[58459], 95.00th=[65799], 00:30:22.815 | 99.00th=[71828], 99.50th=[71828], 99.90th=[84411], 99.95th=[86508], 00:30:22.815 | 99.99th=[87557] 00:30:22.815 bw ( KiB/s): min= 7888, max=16688, per=21.65%, avg=12288.00, stdev=6222.54, samples=2 00:30:22.815 iops : min= 1972, max= 4172, avg=3072.00, stdev=1555.63, samples=2 00:30:22.815 lat (msec) : 2=0.17%, 4=1.51%, 10=21.28%, 20=47.18%, 50=13.59% 00:30:22.815 lat (msec) : 100=16.26% 00:30:22.815 cpu : usr=2.97%, sys=4.89%, 
ctx=254, majf=0, minf=1 00:30:22.815 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:30:22.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:22.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:22.815 issued rwts: total=2806,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:22.815 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:22.815 job2: (groupid=0, jobs=1): err= 0: pid=3487167: Wed Nov 20 09:20:59 2024 00:30:22.815 read: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec) 00:30:22.815 slat (usec): min=3, max=13442, avg=159.51, stdev=1107.49 00:30:22.815 clat (usec): min=12153, max=36665, avg=21377.26, stdev=4727.57 00:30:22.815 lat (usec): min=12160, max=36700, avg=21536.76, stdev=4814.68 00:30:22.815 clat percentiles (usec): 00:30:22.815 | 1.00th=[12387], 5.00th=[13566], 10.00th=[16909], 20.00th=[17433], 00:30:22.815 | 30.00th=[17957], 40.00th=[19268], 50.00th=[20841], 60.00th=[22676], 00:30:22.815 | 70.00th=[23987], 80.00th=[25560], 90.00th=[27395], 95.00th=[30016], 00:30:22.815 | 99.00th=[33424], 99.50th=[33817], 99.90th=[35390], 99.95th=[36439], 00:30:22.815 | 99.99th=[36439] 00:30:22.815 write: IOPS=3210, BW=12.5MiB/s (13.1MB/s)(12.7MiB/1011msec); 0 zone resets 00:30:22.815 slat (usec): min=4, max=13838, avg=146.50, stdev=1091.75 00:30:22.815 clat (usec): min=5126, max=45012, avg=19069.84, stdev=6254.62 00:30:22.815 lat (usec): min=5134, max=45028, avg=19216.35, stdev=6344.71 00:30:22.815 clat percentiles (usec): 00:30:22.815 | 1.00th=[ 7242], 5.00th=[ 9765], 10.00th=[11600], 20.00th=[12911], 00:30:22.815 | 30.00th=[15401], 40.00th=[16188], 50.00th=[20317], 60.00th=[21890], 00:30:22.815 | 70.00th=[22676], 80.00th=[23462], 90.00th=[24773], 95.00th=[27657], 00:30:22.815 | 99.00th=[38536], 99.50th=[39060], 99.90th=[44827], 99.95th=[44827], 00:30:22.815 | 99.99th=[44827] 00:30:22.815 bw ( KiB/s): min=12288, max=12664, per=21.98%, avg=12476.00, 
stdev=265.87, samples=2 00:30:22.815 iops : min= 3072, max= 3166, avg=3119.00, stdev=66.47, samples=2 00:30:22.815 lat (msec) : 10=3.74%, 20=43.57%, 50=52.69% 00:30:22.815 cpu : usr=4.95%, sys=6.63%, ctx=142, majf=0, minf=1 00:30:22.815 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:30:22.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:22.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:22.815 issued rwts: total=3072,3246,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:22.815 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:22.815 job3: (groupid=0, jobs=1): err= 0: pid=3487169: Wed Nov 20 09:20:59 2024 00:30:22.815 read: IOPS=2098, BW=8394KiB/s (8595kB/s)(8444KiB/1006msec) 00:30:22.815 slat (usec): min=3, max=20885, avg=238.75, stdev=1607.83 00:30:22.815 clat (usec): min=1876, max=66715, avg=29283.13, stdev=11351.35 00:30:22.815 lat (usec): min=11370, max=66728, avg=29521.88, stdev=11432.47 00:30:22.815 clat percentiles (usec): 00:30:22.815 | 1.00th=[11731], 5.00th=[16909], 10.00th=[19006], 20.00th=[20579], 00:30:22.815 | 30.00th=[22414], 40.00th=[24511], 50.00th=[26346], 60.00th=[29230], 00:30:22.815 | 70.00th=[30802], 80.00th=[35914], 90.00th=[45876], 95.00th=[52691], 00:30:22.815 | 99.00th=[64750], 99.50th=[66847], 99.90th=[66847], 99.95th=[66847], 00:30:22.815 | 99.99th=[66847] 00:30:22.815 write: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec); 0 zone resets 00:30:22.815 slat (usec): min=4, max=26737, avg=184.78, stdev=1449.58 00:30:22.815 clat (usec): min=9637, max=66501, avg=25441.41, stdev=8513.67 00:30:22.815 lat (usec): min=9646, max=66549, avg=25626.19, stdev=8662.58 00:30:22.815 clat percentiles (usec): 00:30:22.815 | 1.00th=[12518], 5.00th=[15139], 10.00th=[15795], 20.00th=[18220], 00:30:22.815 | 30.00th=[20841], 40.00th=[22414], 50.00th=[23200], 60.00th=[24249], 00:30:22.815 | 70.00th=[27919], 80.00th=[34341], 90.00th=[39060], 
95.00th=[40633], 00:30:22.815 | 99.00th=[49021], 99.50th=[49021], 99.90th=[56361], 99.95th=[57934], 00:30:22.815 | 99.99th=[66323] 00:30:22.815 bw ( KiB/s): min= 7672, max=12288, per=17.58%, avg=9980.00, stdev=3264.00, samples=2 00:30:22.815 iops : min= 1918, max= 3072, avg=2495.00, stdev=816.00, samples=2 00:30:22.815 lat (msec) : 2=0.02%, 10=0.13%, 20=19.46%, 50=77.07%, 100=3.32% 00:30:22.815 cpu : usr=3.08%, sys=5.67%, ctx=111, majf=0, minf=2 00:30:22.815 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:30:22.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:22.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:22.815 issued rwts: total=2111,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:22.815 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:22.815 00:30:22.815 Run status group 0 (all jobs): 00:30:22.815 READ: bw=51.0MiB/s (53.4MB/s), 8394KiB/s-22.0MiB/s (8595kB/s-23.0MB/s), io=53.2MiB (55.8MB), run=1002-1044msec 00:30:22.815 WRITE: bw=55.4MiB/s (58.1MB/s), 9.94MiB/s-23.1MiB/s (10.4MB/s-24.3MB/s), io=57.9MiB (60.7MB), run=1002-1044msec 00:30:22.815 00:30:22.815 Disk stats (read/write): 00:30:22.815 nvme0n1: ios=4658/4992, merge=0/0, ticks=18575/22202, in_queue=40777, util=87.17% 00:30:22.815 nvme0n2: ios=2609/2927, merge=0/0, ticks=17613/19998, in_queue=37611, util=89.44% 00:30:22.815 nvme0n3: ios=2617/2634, merge=0/0, ticks=36327/28917, in_queue=65244, util=95.32% 00:30:22.815 nvme0n4: ios=2022/2048, merge=0/0, ticks=29078/21954, in_queue=51032, util=96.02% 00:30:22.815 09:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:30:22.815 [global] 00:30:22.815 thread=1 00:30:22.815 invalidate=1 00:30:22.815 rw=randwrite 00:30:22.815 time_based=1 00:30:22.815 runtime=1 00:30:22.815 ioengine=libaio 00:30:22.815 
direct=1 00:30:22.815 bs=4096 00:30:22.815 iodepth=128 00:30:22.815 norandommap=0 00:30:22.815 numjobs=1 00:30:22.815 00:30:22.815 verify_dump=1 00:30:22.815 verify_backlog=512 00:30:22.815 verify_state_save=0 00:30:22.815 do_verify=1 00:30:22.815 verify=crc32c-intel 00:30:22.815 [job0] 00:30:22.815 filename=/dev/nvme0n1 00:30:22.815 [job1] 00:30:22.815 filename=/dev/nvme0n2 00:30:22.815 [job2] 00:30:22.815 filename=/dev/nvme0n3 00:30:22.815 [job3] 00:30:22.815 filename=/dev/nvme0n4 00:30:22.815 Could not set queue depth (nvme0n1) 00:30:22.815 Could not set queue depth (nvme0n2) 00:30:22.815 Could not set queue depth (nvme0n3) 00:30:22.815 Could not set queue depth (nvme0n4) 00:30:23.090 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:23.090 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:23.090 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:23.090 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:23.090 fio-3.35 00:30:23.090 Starting 4 threads 00:30:24.463 00:30:24.463 job0: (groupid=0, jobs=1): err= 0: pid=3487395: Wed Nov 20 09:21:01 2024 00:30:24.463 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:30:24.463 slat (usec): min=2, max=12460, avg=109.43, stdev=776.64 00:30:24.463 clat (usec): min=5172, max=39031, avg=15156.44, stdev=4585.86 00:30:24.463 lat (usec): min=5178, max=39044, avg=15265.87, stdev=4632.51 00:30:24.463 clat percentiles (usec): 00:30:24.463 | 1.00th=[ 5735], 5.00th=[ 9896], 10.00th=[10683], 20.00th=[11994], 00:30:24.463 | 30.00th=[12780], 40.00th=[14091], 50.00th=[14615], 60.00th=[15401], 00:30:24.463 | 70.00th=[16909], 80.00th=[17957], 90.00th=[18744], 95.00th=[21627], 00:30:24.463 | 99.00th=[34866], 99.50th=[36439], 99.90th=[36439], 99.95th=[37487], 00:30:24.463 
| 99.99th=[39060] 00:30:24.463 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:30:24.463 slat (usec): min=3, max=18521, avg=138.51, stdev=958.31 00:30:24.463 clat (usec): min=1768, max=74375, avg=17904.25, stdev=12578.74 00:30:24.463 lat (usec): min=5306, max=74383, avg=18042.76, stdev=12677.21 00:30:24.463 clat percentiles (usec): 00:30:24.463 | 1.00th=[ 5669], 5.00th=[ 7373], 10.00th=[ 9634], 20.00th=[10945], 00:30:24.463 | 30.00th=[12256], 40.00th=[13960], 50.00th=[14615], 60.00th=[15926], 00:30:24.463 | 70.00th=[17695], 80.00th=[18744], 90.00th=[26346], 95.00th=[50070], 00:30:24.463 | 99.00th=[71828], 99.50th=[73925], 99.90th=[73925], 99.95th=[73925], 00:30:24.463 | 99.99th=[73925] 00:30:24.463 bw ( KiB/s): min=12032, max=19719, per=25.60%, avg=15875.50, stdev=5435.53, samples=2 00:30:24.463 iops : min= 3008, max= 4929, avg=3968.50, stdev=1358.35, samples=2 00:30:24.463 lat (msec) : 2=0.01%, 10=8.92%, 20=79.09%, 50=9.28%, 100=2.70% 00:30:24.463 cpu : usr=3.78%, sys=5.27%, ctx=219, majf=0, minf=1 00:30:24.463 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:30:24.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:24.463 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:24.463 issued rwts: total=3584,4092,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:24.463 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:24.463 job1: (groupid=0, jobs=1): err= 0: pid=3487401: Wed Nov 20 09:21:01 2024 00:30:24.463 read: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec) 00:30:24.463 slat (usec): min=2, max=17594, avg=139.72, stdev=993.38 00:30:24.463 clat (usec): min=5770, max=62165, avg=17933.41, stdev=10034.06 00:30:24.463 lat (usec): min=5775, max=62210, avg=18073.13, stdev=10089.24 00:30:24.463 clat percentiles (usec): 00:30:24.463 | 1.00th=[ 6194], 5.00th=[ 9110], 10.00th=[10159], 20.00th=[11207], 00:30:24.463 | 30.00th=[11469], 40.00th=[12387], 
50.00th=[13304], 60.00th=[16450], 00:30:24.463 | 70.00th=[19792], 80.00th=[25822], 90.00th=[30540], 95.00th=[39060], 00:30:24.463 | 99.00th=[55313], 99.50th=[55313], 99.90th=[58459], 99.95th=[61080], 00:30:24.463 | 99.99th=[62129] 00:30:24.463 write: IOPS=3666, BW=14.3MiB/s (15.0MB/s)(14.5MiB/1012msec); 0 zone resets 00:30:24.463 slat (usec): min=3, max=22354, avg=129.28, stdev=932.12 00:30:24.463 clat (usec): min=1504, max=60803, avg=17173.27, stdev=10807.72 00:30:24.463 lat (usec): min=1542, max=60821, avg=17302.55, stdev=10889.55 00:30:24.463 clat percentiles (usec): 00:30:24.463 | 1.00th=[ 4752], 5.00th=[ 8848], 10.00th=[ 9765], 20.00th=[10552], 00:30:24.463 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11863], 60.00th=[13042], 00:30:24.463 | 70.00th=[15533], 80.00th=[25822], 90.00th=[34866], 95.00th=[42206], 00:30:24.463 | 99.00th=[50594], 99.50th=[50594], 99.90th=[57934], 99.95th=[58459], 00:30:24.463 | 99.99th=[60556] 00:30:24.463 bw ( KiB/s): min=12288, max=16384, per=23.12%, avg=14336.00, stdev=2896.31, samples=2 00:30:24.463 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:30:24.463 lat (msec) : 2=0.03%, 10=10.56%, 20=62.90%, 50=24.71%, 100=1.81% 00:30:24.463 cpu : usr=2.18%, sys=3.56%, ctx=368, majf=0, minf=1 00:30:24.463 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:30:24.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:24.463 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:24.463 issued rwts: total=3584,3710,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:24.463 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:24.463 job2: (groupid=0, jobs=1): err= 0: pid=3487418: Wed Nov 20 09:21:01 2024 00:30:24.463 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:30:24.463 slat (usec): min=2, max=20463, avg=116.67, stdev=769.89 00:30:24.463 clat (usec): min=3317, max=41938, avg=14651.02, stdev=5169.05 00:30:24.463 lat (usec): 
min=3330, max=41942, avg=14767.69, stdev=5207.37 00:30:24.463 clat percentiles (usec): 00:30:24.463 | 1.00th=[ 3392], 5.00th=[ 9372], 10.00th=[10552], 20.00th=[11469], 00:30:24.463 | 30.00th=[12387], 40.00th=[12649], 50.00th=[13698], 60.00th=[14353], 00:30:24.463 | 70.00th=[15401], 80.00th=[16909], 90.00th=[20055], 95.00th=[24249], 00:30:24.463 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:30:24.463 | 99.99th=[41681] 00:30:24.463 write: IOPS=3720, BW=14.5MiB/s (15.2MB/s)(14.6MiB/1006msec); 0 zone resets 00:30:24.463 slat (usec): min=3, max=17391, avg=146.29, stdev=850.00 00:30:24.463 clat (usec): min=3342, max=84014, avg=19793.11, stdev=15765.53 00:30:24.463 lat (usec): min=3352, max=84029, avg=19939.40, stdev=15875.94 00:30:24.463 clat percentiles (usec): 00:30:24.463 | 1.00th=[ 5276], 5.00th=[10028], 10.00th=[11469], 20.00th=[12256], 00:30:24.463 | 30.00th=[12649], 40.00th=[13566], 50.00th=[14091], 60.00th=[15139], 00:30:24.463 | 70.00th=[15926], 80.00th=[20317], 90.00th=[40109], 95.00th=[62653], 00:30:24.463 | 99.00th=[81265], 99.50th=[83362], 99.90th=[84411], 99.95th=[84411], 00:30:24.463 | 99.99th=[84411] 00:30:24.463 bw ( KiB/s): min=14200, max=14728, per=23.33%, avg=14464.00, stdev=373.35, samples=2 00:30:24.463 iops : min= 3550, max= 3682, avg=3616.00, stdev=93.34, samples=2 00:30:24.463 lat (msec) : 4=1.13%, 10=4.35%, 20=78.64%, 50=12.13%, 100=3.74% 00:30:24.463 cpu : usr=3.98%, sys=4.48%, ctx=307, majf=0, minf=1 00:30:24.463 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:30:24.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:24.463 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:24.463 issued rwts: total=3584,3743,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:24.463 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:24.463 job3: (groupid=0, jobs=1): err= 0: pid=3487430: Wed Nov 20 09:21:01 2024 00:30:24.463 read: IOPS=4256, 
BW=16.6MiB/s (17.4MB/s)(17.3MiB/1042msec) 00:30:24.463 slat (usec): min=2, max=14507, avg=109.46, stdev=711.54 00:30:24.463 clat (usec): min=6698, max=53402, avg=14807.04, stdev=7039.25 00:30:24.463 lat (usec): min=6703, max=57054, avg=14916.50, stdev=7068.48 00:30:24.463 clat percentiles (usec): 00:30:24.463 | 1.00th=[ 8291], 5.00th=[10028], 10.00th=[10814], 20.00th=[11863], 00:30:24.463 | 30.00th=[12387], 40.00th=[12911], 50.00th=[13173], 60.00th=[13566], 00:30:24.463 | 70.00th=[13960], 80.00th=[14877], 90.00th=[17957], 95.00th=[23987], 00:30:24.463 | 99.00th=[49021], 99.50th=[53216], 99.90th=[53216], 99.95th=[53216], 00:30:24.463 | 99.99th=[53216] 00:30:24.463 write: IOPS=4422, BW=17.3MiB/s (18.1MB/s)(18.0MiB/1042msec); 0 zone resets 00:30:24.463 slat (usec): min=3, max=14537, avg=105.86, stdev=704.67 00:30:24.463 clat (usec): min=4973, max=40712, avg=14316.86, stdev=6054.12 00:30:24.463 lat (usec): min=4980, max=40722, avg=14422.72, stdev=6084.71 00:30:24.463 clat percentiles (usec): 00:30:24.463 | 1.00th=[ 6587], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[11207], 00:30:24.463 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12518], 60.00th=[13042], 00:30:24.463 | 70.00th=[13566], 80.00th=[14484], 90.00th=[20841], 95.00th=[30802], 00:30:24.463 | 99.00th=[38536], 99.50th=[40633], 99.90th=[40633], 99.95th=[40633], 00:30:24.463 | 99.99th=[40633] 00:30:24.463 bw ( KiB/s): min=18112, max=18752, per=29.73%, avg=18432.00, stdev=452.55, samples=2 00:30:24.463 iops : min= 4528, max= 4688, avg=4608.00, stdev=113.14, samples=2 00:30:24.463 lat (msec) : 10=7.21%, 20=82.99%, 50=9.33%, 100=0.46% 00:30:24.463 cpu : usr=3.65%, sys=4.61%, ctx=374, majf=0, minf=1 00:30:24.463 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:30:24.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:24.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:24.464 issued rwts: total=4435,4608,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:30:24.464 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:24.464 00:30:24.464 Run status group 0 (all jobs): 00:30:24.464 READ: bw=56.9MiB/s (59.7MB/s), 13.8MiB/s-16.6MiB/s (14.5MB/s-17.4MB/s), io=59.3MiB (62.2MB), run=1006-1042msec 00:30:24.464 WRITE: bw=60.6MiB/s (63.5MB/s), 14.3MiB/s-17.3MiB/s (15.0MB/s-18.1MB/s), io=63.1MiB (66.2MB), run=1006-1042msec 00:30:24.464 00:30:24.464 Disk stats (read/write): 00:30:24.464 nvme0n1: ios=3105/3556, merge=0/0, ticks=32676/40821, in_queue=73497, util=99.50% 00:30:24.464 nvme0n2: ios=2916/3072, merge=0/0, ticks=16340/15068, in_queue=31408, util=98.98% 00:30:24.464 nvme0n3: ios=2783/3072, merge=0/0, ticks=18564/27385, in_queue=45949, util=89.56% 00:30:24.464 nvme0n4: ios=3641/4012, merge=0/0, ticks=23096/22013, in_queue=45109, util=97.58% 00:30:24.464 09:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:30:24.464 09:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3487697 00:30:24.464 09:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:30:24.464 09:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:30:24.464 [global] 00:30:24.464 thread=1 00:30:24.464 invalidate=1 00:30:24.464 rw=read 00:30:24.464 time_based=1 00:30:24.464 runtime=10 00:30:24.464 ioengine=libaio 00:30:24.464 direct=1 00:30:24.464 bs=4096 00:30:24.464 iodepth=1 00:30:24.464 norandommap=1 00:30:24.464 numjobs=1 00:30:24.464 00:30:24.464 [job0] 00:30:24.464 filename=/dev/nvme0n1 00:30:24.464 [job1] 00:30:24.464 filename=/dev/nvme0n2 00:30:24.464 [job2] 00:30:24.464 filename=/dev/nvme0n3 00:30:24.464 [job3] 00:30:24.464 filename=/dev/nvme0n4 00:30:24.464 Could not set queue depth (nvme0n1) 00:30:24.464 Could not set queue depth (nvme0n2) 
00:30:24.464 Could not set queue depth (nvme0n3) 00:30:24.464 Could not set queue depth (nvme0n4) 00:30:24.464 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:24.464 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:24.464 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:24.464 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:24.464 fio-3.35 00:30:24.464 Starting 4 threads 00:30:27.740 09:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:30:27.740 09:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:30:27.740 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=294912, buflen=4096 00:30:27.740 fio: pid=3487831, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:27.740 09:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:27.740 09:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:30:27.998 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=21487616, buflen=4096 00:30:27.998 fio: pid=3487830, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:28.255 09:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:28.255 09:21:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:30:28.255 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=360448, buflen=4096 00:30:28.255 fio: pid=3487819, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:28.512 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=43184128, buflen=4096 00:30:28.512 fio: pid=3487820, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:28.512 09:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:28.512 09:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:30:28.512 00:30:28.512 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3487819: Wed Nov 20 09:21:05 2024 00:30:28.512 read: IOPS=25, BW=99.7KiB/s (102kB/s)(352KiB/3531msec) 00:30:28.512 slat (usec): min=8, max=6902, avg=160.54, stdev=955.11 00:30:28.512 clat (usec): min=285, max=42098, avg=39686.18, stdev=8655.49 00:30:28.512 lat (usec): min=301, max=49001, avg=39848.16, stdev=8747.95 00:30:28.512 clat percentiles (usec): 00:30:28.512 | 1.00th=[ 285], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:30:28.512 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:30:28.512 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:28.512 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:28.512 | 99.99th=[42206] 00:30:28.512 bw ( KiB/s): min= 96, max= 104, per=0.60%, avg=101.33, stdev= 4.13, samples=6 00:30:28.512 iops : min= 24, max= 26, avg=25.33, stdev= 1.03, samples=6 00:30:28.512 lat 
(usec) : 500=4.49% 00:30:28.512 lat (msec) : 50=94.38% 00:30:28.512 cpu : usr=0.06%, sys=0.00%, ctx=93, majf=0, minf=2 00:30:28.512 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:28.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.512 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.513 issued rwts: total=89,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.513 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:28.513 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3487820: Wed Nov 20 09:21:05 2024 00:30:28.513 read: IOPS=2771, BW=10.8MiB/s (11.3MB/s)(41.2MiB/3805msec) 00:30:28.513 slat (usec): min=4, max=11737, avg=12.04, stdev=227.01 00:30:28.513 clat (usec): min=184, max=42071, avg=346.27, stdev=2103.10 00:30:28.513 lat (usec): min=189, max=42091, avg=358.31, stdev=2115.92 00:30:28.513 clat percentiles (usec): 00:30:28.513 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 208], 00:30:28.513 | 30.00th=[ 217], 40.00th=[ 225], 50.00th=[ 233], 60.00th=[ 241], 00:30:28.513 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 265], 95.00th=[ 277], 00:30:28.513 | 99.00th=[ 449], 99.50th=[ 510], 99.90th=[41157], 99.95th=[42206], 00:30:28.513 | 99.99th=[42206] 00:30:28.513 bw ( KiB/s): min= 96, max=16903, per=62.67%, avg=10507.29, stdev=6917.10, samples=7 00:30:28.513 iops : min= 24, max= 4225, avg=2626.71, stdev=1729.16, samples=7 00:30:28.513 lat (usec) : 250=77.15%, 500=22.25%, 750=0.28% 00:30:28.513 lat (msec) : 2=0.01%, 10=0.01%, 20=0.02%, 50=0.27% 00:30:28.513 cpu : usr=0.63%, sys=2.44%, ctx=10550, majf=0, minf=2 00:30:28.513 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:28.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.513 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.513 issued rwts: 
total=10544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.513 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:28.513 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3487830: Wed Nov 20 09:21:05 2024 00:30:28.513 read: IOPS=1617, BW=6471KiB/s (6626kB/s)(20.5MiB/3243msec) 00:30:28.513 slat (usec): min=4, max=11892, avg=14.46, stdev=164.12 00:30:28.513 clat (usec): min=197, max=42105, avg=596.12, stdev=3707.00 00:30:28.513 lat (usec): min=208, max=52968, avg=610.58, stdev=3735.39 00:30:28.513 clat percentiles (usec): 00:30:28.513 | 1.00th=[ 212], 5.00th=[ 221], 10.00th=[ 227], 20.00th=[ 235], 00:30:28.513 | 30.00th=[ 241], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 265], 00:30:28.513 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 293], 00:30:28.513 | 99.00th=[ 412], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:30:28.513 | 99.99th=[42206] 00:30:28.513 bw ( KiB/s): min= 96, max=14336, per=41.67%, avg=6988.00, stdev=6660.82, samples=6 00:30:28.513 iops : min= 24, max= 3584, avg=1747.00, stdev=1665.21, samples=6 00:30:28.513 lat (usec) : 250=39.91%, 500=59.18%, 750=0.06% 00:30:28.513 lat (msec) : 50=0.84% 00:30:28.513 cpu : usr=0.80%, sys=3.02%, ctx=5250, majf=0, minf=1 00:30:28.513 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:28.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.513 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.513 issued rwts: total=5247,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.513 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:28.513 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3487831: Wed Nov 20 09:21:05 2024 00:30:28.513 read: IOPS=24, BW=98.1KiB/s (100kB/s)(288KiB/2935msec) 00:30:28.513 slat (nsec): min=7978, max=33546, avg=16486.93, stdev=6906.28 00:30:28.513 clat 
(usec): min=366, max=41042, avg=40409.25, stdev=4785.92 00:30:28.513 lat (usec): min=392, max=41056, avg=40425.77, stdev=4784.72 00:30:28.513 clat percentiles (usec): 00:30:28.513 | 1.00th=[ 367], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:30:28.513 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:28.513 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:28.513 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:28.513 | 99.99th=[41157] 00:30:28.513 bw ( KiB/s): min= 96, max= 104, per=0.59%, avg=99.20, stdev= 4.38, samples=5 00:30:28.513 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:30:28.513 lat (usec) : 500=1.37% 00:30:28.513 lat (msec) : 50=97.26% 00:30:28.513 cpu : usr=0.07%, sys=0.00%, ctx=73, majf=0, minf=1 00:30:28.513 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:28.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.513 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.513 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.513 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:28.513 00:30:28.513 Run status group 0 (all jobs): 00:30:28.513 READ: bw=16.4MiB/s (17.2MB/s), 98.1KiB/s-10.8MiB/s (100kB/s-11.3MB/s), io=62.3MiB (65.3MB), run=2935-3805msec 00:30:28.513 00:30:28.513 Disk stats (read/write): 00:30:28.513 nvme0n1: ios=84/0, merge=0/0, ticks=3326/0, in_queue=3326, util=95.74% 00:30:28.513 nvme0n2: ios=9647/0, merge=0/0, ticks=4466/0, in_queue=4466, util=98.45% 00:30:28.513 nvme0n3: ios=5290/0, merge=0/0, ticks=3962/0, in_queue=3962, util=98.88% 00:30:28.513 nvme0n4: ios=70/0, merge=0/0, ticks=2829/0, in_queue=2829, util=96.71% 00:30:28.772 09:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:28.772 09:21:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:30:29.029 09:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:29.029 09:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:30:29.286 09:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:29.286 09:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:30:29.544 09:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:29.544 09:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:30:29.801 09:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:30:29.801 09:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3487697 00:30:29.801 09:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:30:29.801 09:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:30.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:30.058 09:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect 
SPDKISFASTANDAWESOME 00:30:30.058 09:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:30:30.058 09:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:30:30.058 09:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:30.058 09:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:30:30.058 09:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:30.058 09:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:30:30.058 09:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:30:30.058 09:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:30:30.059 nvmf hotplug test: fio failed as expected 00:30:30.059 09:21:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:30.317 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:30:30.317 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:30:30.317 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:30:30.317 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:30:30.317 09:21:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:30:30.317 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:30.317 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:30:30.317 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:30.317 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:30:30.317 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:30.317 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:30.317 rmmod nvme_tcp 00:30:30.317 rmmod nvme_fabrics 00:30:30.317 rmmod nvme_keyring 00:30:30.317 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:30.317 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:30:30.317 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:30:30.317 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3485640 ']' 00:30:30.317 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3485640 00:30:30.317 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 3485640 ']' 00:30:30.317 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 3485640 00:30:30.317 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:30:30.317 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:30.317 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3485640 00:30:30.573 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:30.573 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:30.573 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3485640' 00:30:30.573 killing process with pid 3485640 00:30:30.573 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 3485640 00:30:30.573 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 3485640 00:30:30.573 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:30.573 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:30.573 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:30.573 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:30:30.573 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:30:30.573 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:30.573 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:30:30.573 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:30.573 09:21:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:30.573 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:30.573 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:30.573 09:21:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:33.101 00:30:33.101 real 0m24.167s 00:30:33.101 user 1m8.229s 00:30:33.101 sys 0m10.241s 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:33.101 ************************************ 00:30:33.101 END TEST nvmf_fio_target 00:30:33.101 ************************************ 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:33.101 ************************************ 00:30:33.101 START TEST nvmf_bdevio 00:30:33.101 ************************************ 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:30:33.101 * Looking for test storage... 00:30:33.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:30:33.101 09:21:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # 
(( ver1[v] < ver2[v] )) 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:33.101 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:33.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:33.102 --rc genhtml_branch_coverage=1 00:30:33.102 --rc genhtml_function_coverage=1 00:30:33.102 --rc genhtml_legend=1 00:30:33.102 --rc geninfo_all_blocks=1 00:30:33.102 --rc geninfo_unexecuted_blocks=1 00:30:33.102 00:30:33.102 ' 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:33.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:33.102 --rc genhtml_branch_coverage=1 00:30:33.102 --rc genhtml_function_coverage=1 00:30:33.102 --rc genhtml_legend=1 00:30:33.102 --rc geninfo_all_blocks=1 00:30:33.102 --rc geninfo_unexecuted_blocks=1 00:30:33.102 00:30:33.102 ' 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:33.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:33.102 --rc genhtml_branch_coverage=1 00:30:33.102 --rc genhtml_function_coverage=1 00:30:33.102 --rc genhtml_legend=1 00:30:33.102 --rc geninfo_all_blocks=1 00:30:33.102 --rc geninfo_unexecuted_blocks=1 00:30:33.102 00:30:33.102 ' 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:33.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:33.102 --rc genhtml_branch_coverage=1 00:30:33.102 --rc genhtml_function_coverage=1 00:30:33.102 --rc genhtml_legend=1 00:30:33.102 --rc 
geninfo_all_blocks=1 00:30:33.102 --rc geninfo_unexecuted_blocks=1 00:30:33.102 00:30:33.102 ' 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:33.102 09:21:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:33.102 09:21:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:30:33.102 09:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:35.003 09:21:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:35.003 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:35.003 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.003 09:21:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:35.003 Found net devices under 0000:09:00.0: cvl_0_0 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:35.003 Found net devices under 0000:09:00.1: cvl_0_1 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:35.003 09:21:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:35.003 09:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:35.262 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:35.263 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:35.263 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:30:35.263 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:35.263 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:35.263 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:35.263 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:35.263 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:35.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:35.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:30:35.263 00:30:35.263 --- 10.0.0.2 ping statistics --- 00:30:35.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.263 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:30:35.263 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:35.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:35.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:30:35.263 00:30:35.263 --- 10.0.0.1 ping statistics --- 00:30:35.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.263 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:30:35.263 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:35.263 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:30:35.263 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:35.263 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:35.263 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:35.263 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:35.263 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:35.263 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:35.263 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:35.263 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:30:35.263 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:35.263 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:35.263 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:35.263 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=3490990 00:30:35.263 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:30:35.263 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3490990 00:30:35.263 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 3490990 ']' 00:30:35.263 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:35.263 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:35.263 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:35.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:35.263 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:35.263 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:35.263 [2024-11-20 09:21:12.174291] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:35.263 [2024-11-20 09:21:12.175577] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:30:35.263 [2024-11-20 09:21:12.175655] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:35.263 [2024-11-20 09:21:12.252913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:35.521 [2024-11-20 09:21:12.316801] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:35.521 [2024-11-20 09:21:12.316856] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:35.521 [2024-11-20 09:21:12.316884] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:35.521 [2024-11-20 09:21:12.316896] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:35.521 [2024-11-20 09:21:12.316906] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:35.521 [2024-11-20 09:21:12.318566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:35.521 [2024-11-20 09:21:12.318631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:35.521 [2024-11-20 09:21:12.318676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:35.521 [2024-11-20 09:21:12.318679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:35.521 [2024-11-20 09:21:12.418515] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:35.521 [2024-11-20 09:21:12.418749] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:35.521 [2024-11-20 09:21:12.419040] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:35.521 [2024-11-20 09:21:12.419712] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:35.521 [2024-11-20 09:21:12.419972] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:35.521 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:35.521 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:30:35.521 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:35.521 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:35.521 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:35.521 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:35.521 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:35.521 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.521 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:35.522 [2024-11-20 09:21:12.471468] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:35.522 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.522 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:35.522 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.522 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:35.522 Malloc0 00:30:35.522 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.522 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:35.522 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.522 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:35.522 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.522 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:35.522 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.522 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:35.522 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.522 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:35.522 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.522 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:35.522 [2024-11-20 09:21:12.531565] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:30:35.522 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.522 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:30:35.522 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:30:35.522 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:30:35.522 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:30:35.522 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:35.522 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:35.522 { 00:30:35.522 "params": { 00:30:35.522 "name": "Nvme$subsystem", 00:30:35.522 "trtype": "$TEST_TRANSPORT", 00:30:35.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:35.522 "adrfam": "ipv4", 00:30:35.522 "trsvcid": "$NVMF_PORT", 00:30:35.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:35.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:35.522 "hdgst": ${hdgst:-false}, 00:30:35.522 "ddgst": ${ddgst:-false} 00:30:35.522 }, 00:30:35.522 "method": "bdev_nvme_attach_controller" 00:30:35.522 } 00:30:35.522 EOF 00:30:35.522 )") 00:30:35.522 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:30:35.522 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:30:35.522 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:30:35.522 09:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:35.522 "params": { 00:30:35.522 "name": "Nvme1", 00:30:35.522 "trtype": "tcp", 00:30:35.522 "traddr": "10.0.0.2", 00:30:35.522 "adrfam": "ipv4", 00:30:35.522 "trsvcid": "4420", 00:30:35.522 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:35.522 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:35.522 "hdgst": false, 00:30:35.522 "ddgst": false 00:30:35.522 }, 00:30:35.522 "method": "bdev_nvme_attach_controller" 00:30:35.522 }' 00:30:35.779 [2024-11-20 09:21:12.582479] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:30:35.779 [2024-11-20 09:21:12.582558] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3491105 ] 00:30:35.779 [2024-11-20 09:21:12.651711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:35.779 [2024-11-20 09:21:12.716952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:35.779 [2024-11-20 09:21:12.719323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:35.779 [2024-11-20 09:21:12.719336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:36.344 I/O targets: 00:30:36.344 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:30:36.344 00:30:36.344 00:30:36.344 CUnit - A unit testing framework for C - Version 2.1-3 00:30:36.344 http://cunit.sourceforge.net/ 00:30:36.344 00:30:36.344 00:30:36.344 Suite: bdevio tests on: Nvme1n1 00:30:36.344 Test: blockdev write read block ...passed 00:30:36.344 Test: blockdev write zeroes read block ...passed 00:30:36.344 Test: blockdev write zeroes read no split ...passed 00:30:36.344 Test: blockdev 
write zeroes read split ...passed 00:30:36.344 Test: blockdev write zeroes read split partial ...passed 00:30:36.344 Test: blockdev reset ...[2024-11-20 09:21:13.172602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:30:36.344 [2024-11-20 09:21:13.172719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5a4640 (9): Bad file descriptor 00:30:36.344 [2024-11-20 09:21:13.305487] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:30:36.344 passed 00:30:36.344 Test: blockdev write read 8 blocks ...passed 00:30:36.344 Test: blockdev write read size > 128k ...passed 00:30:36.344 Test: blockdev write read invalid size ...passed 00:30:36.344 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:36.344 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:36.344 Test: blockdev write read max offset ...passed 00:30:36.602 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:36.602 Test: blockdev writev readv 8 blocks ...passed 00:30:36.602 Test: blockdev writev readv 30 x 1block ...passed 00:30:36.602 Test: blockdev writev readv block ...passed 00:30:36.602 Test: blockdev writev readv size > 128k ...passed 00:30:36.602 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:36.602 Test: blockdev comparev and writev ...[2024-11-20 09:21:13.561592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:36.602 [2024-11-20 09:21:13.561629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:36.602 [2024-11-20 09:21:13.561654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:36.602 
[2024-11-20 09:21:13.561672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:36.602 [2024-11-20 09:21:13.562079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:36.602 [2024-11-20 09:21:13.562111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:36.602 [2024-11-20 09:21:13.562134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:36.602 [2024-11-20 09:21:13.562150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:36.602 [2024-11-20 09:21:13.562573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:36.602 [2024-11-20 09:21:13.562598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:36.602 [2024-11-20 09:21:13.562619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:36.602 [2024-11-20 09:21:13.562635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:36.602 [2024-11-20 09:21:13.563039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:36.602 [2024-11-20 09:21:13.563063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:36.602 [2024-11-20 09:21:13.563084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:36.602 [2024-11-20 09:21:13.563100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:36.602 passed 00:30:36.859 Test: blockdev nvme passthru rw ...passed 00:30:36.859 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:21:13.645616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:36.859 [2024-11-20 09:21:13.645647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:36.859 [2024-11-20 09:21:13.645795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:36.859 [2024-11-20 09:21:13.645818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:36.859 [2024-11-20 09:21:13.645963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:36.859 [2024-11-20 09:21:13.645985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:36.859 [2024-11-20 09:21:13.646127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:36.859 [2024-11-20 09:21:13.646150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:36.859 passed 00:30:36.859 Test: blockdev nvme admin passthru ...passed 00:30:36.859 Test: blockdev copy ...passed 00:30:36.859 00:30:36.859 Run Summary: Type Total Ran Passed Failed Inactive 00:30:36.859 suites 1 1 n/a 0 0 00:30:36.859 tests 23 23 23 0 0 00:30:36.859 asserts 152 152 152 0 n/a 00:30:36.859 00:30:36.859 Elapsed time = 1.278 
seconds 00:30:37.117 09:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:37.117 09:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.117 09:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:37.117 09:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.117 09:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:30:37.117 09:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:30:37.117 09:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:37.117 09:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:30:37.117 09:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:37.117 09:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:30:37.117 09:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:37.117 09:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:37.117 rmmod nvme_tcp 00:30:37.117 rmmod nvme_fabrics 00:30:37.117 rmmod nvme_keyring 00:30:37.117 09:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:37.117 09:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:30:37.117 09:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:30:37.117 09:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 3490990 ']' 00:30:37.117 09:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3490990 00:30:37.117 09:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 3490990 ']' 00:30:37.117 09:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 3490990 00:30:37.117 09:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:30:37.117 09:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:37.117 09:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3490990 00:30:37.117 09:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:30:37.117 09:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:30:37.117 09:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3490990' 00:30:37.117 killing process with pid 3490990 00:30:37.117 09:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 3490990 00:30:37.117 09:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 3490990 00:30:37.375 09:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:37.375 09:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:37.375 09:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:37.375 09:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:30:37.375 09:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:30:37.375 09:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:37.375 09:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:30:37.375 09:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:37.375 09:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:37.375 09:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.375 09:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:37.375 09:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.908 09:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:39.908 00:30:39.908 real 0m6.640s 00:30:39.908 user 0m9.703s 00:30:39.908 sys 0m2.622s 00:30:39.908 09:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:39.908 09:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:39.908 ************************************ 00:30:39.908 END TEST nvmf_bdevio 00:30:39.908 ************************************ 00:30:39.908 09:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:30:39.908 00:30:39.908 real 3m56.707s 00:30:39.908 user 8m57.932s 00:30:39.908 sys 1m24.422s 00:30:39.908 09:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 
-- # xtrace_disable 00:30:39.908 09:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:39.908 ************************************ 00:30:39.908 END TEST nvmf_target_core_interrupt_mode 00:30:39.908 ************************************ 00:30:39.908 09:21:16 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:30:39.908 09:21:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:39.908 09:21:16 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:39.908 09:21:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:39.908 ************************************ 00:30:39.908 START TEST nvmf_interrupt 00:30:39.908 ************************************ 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:30:39.908 * Looking for test storage... 
00:30:39.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:39.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.908 --rc genhtml_branch_coverage=1 00:30:39.908 --rc genhtml_function_coverage=1 00:30:39.908 --rc genhtml_legend=1 00:30:39.908 --rc geninfo_all_blocks=1 00:30:39.908 --rc geninfo_unexecuted_blocks=1 00:30:39.908 00:30:39.908 ' 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:39.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.908 --rc genhtml_branch_coverage=1 00:30:39.908 --rc 
genhtml_function_coverage=1 00:30:39.908 --rc genhtml_legend=1 00:30:39.908 --rc geninfo_all_blocks=1 00:30:39.908 --rc geninfo_unexecuted_blocks=1 00:30:39.908 00:30:39.908 ' 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:39.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.908 --rc genhtml_branch_coverage=1 00:30:39.908 --rc genhtml_function_coverage=1 00:30:39.908 --rc genhtml_legend=1 00:30:39.908 --rc geninfo_all_blocks=1 00:30:39.908 --rc geninfo_unexecuted_blocks=1 00:30:39.908 00:30:39.908 ' 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:39.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.908 --rc genhtml_branch_coverage=1 00:30:39.908 --rc genhtml_function_coverage=1 00:30:39.908 --rc genhtml_legend=1 00:30:39.908 --rc geninfo_all_blocks=1 00:30:39.908 --rc geninfo_unexecuted_blocks=1 00:30:39.908 00:30:39.908 ' 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:39.908 
09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:39.908 09:21:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:39.909 09:21:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:39.909 09:21:16 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.909 
09:21:16 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.909 09:21:16 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.909 09:21:16 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:30:39.909 09:21:16 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.909 09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:30:39.909 09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:39.909 09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:39.909 09:21:16 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:39.909 09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:39.909 09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:39.909 09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:39.909 09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:39.909 09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:39.909 09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:39.909 09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:39.909 09:21:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:30:39.909 09:21:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:39.909 09:21:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:30:39.909 09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:39.909 09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:39.909 09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:39.909 09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:39.909 09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:39.909 09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:39.909 09:21:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:39.909 09:21:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.909 09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:39.909 
09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:39.909 09:21:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:30:39.909 09:21:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:41.808 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:41.808 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:30:41.808 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:41.808 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:41.809 09:21:18 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:41.809 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:41.809 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:41.809 09:21:18 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:41.809 Found net devices under 0000:09:00.0: cvl_0_0 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:41.809 Found net devices under 0000:09:00.1: cvl_0_1 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:41.809 09:21:18 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:41.809 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:41.809 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:30:41.809 00:30:41.809 --- 10.0.0.2 ping statistics --- 00:30:41.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.809 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:41.809 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:41.809 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:30:41.809 00:30:41.809 --- 10.0.0.1 ping statistics --- 00:30:41.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.809 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:41.809 09:21:18 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3493229 00:30:41.809 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:41.810 09:21:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3493229 00:30:41.810 09:21:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 3493229 ']' 00:30:41.810 09:21:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:41.810 09:21:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:41.810 09:21:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:41.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:41.810 09:21:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:41.810 09:21:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:41.810 [2024-11-20 09:21:18.800843] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:41.810 [2024-11-20 09:21:18.801899] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:30:41.810 [2024-11-20 09:21:18.801963] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:42.083 [2024-11-20 09:21:18.873214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:42.083 [2024-11-20 09:21:18.929487] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:42.083 [2024-11-20 09:21:18.929547] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:42.083 [2024-11-20 09:21:18.929575] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:42.083 [2024-11-20 09:21:18.929586] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:42.083 [2024-11-20 09:21:18.929596] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:42.083 [2024-11-20 09:21:18.930939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:42.083 [2024-11-20 09:21:18.930945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.083 [2024-11-20 09:21:19.018630] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:42.083 [2024-11-20 09:21:19.018678] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:42.083 [2024-11-20 09:21:19.018899] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:30:42.083 09:21:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:42.083 09:21:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:30:42.083 09:21:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:42.083 09:21:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:42.083 09:21:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:42.083 09:21:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:42.083 09:21:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:30:42.083 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:30:42.083 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:30:42.083 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:30:42.083 5000+0 records in 00:30:42.083 5000+0 records out 00:30:42.083 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0122933 s, 833 MB/s 00:30:42.083 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:30:42.083 09:21:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.083 09:21:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:42.342 AIO0 00:30:42.342 09:21:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.342 09:21:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:30:42.342 09:21:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.342 09:21:19 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:42.342 [2024-11-20 09:21:19.127550] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:42.342 09:21:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.342 09:21:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:42.342 09:21:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.342 09:21:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:42.342 09:21:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.342 09:21:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:30:42.342 09:21:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.342 09:21:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:42.342 09:21:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.342 09:21:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:42.342 09:21:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.342 09:21:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:42.342 [2024-11-20 09:21:19.151763] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:42.342 09:21:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.342 09:21:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:30:42.342 09:21:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3493229 0 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3493229 0 idle 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3493229 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3493229 -w 256 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3493229 root 20 0 128.2g 47232 34560 S 6.7 0.1 0:00.27 reactor_0' 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3493229 root 20 0 128.2g 47232 34560 S 6.7 0.1 0:00.27 reactor_0 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:42.343 
09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3493229 1 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3493229 1 idle 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3493229 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3493229 -w 256 00:30:42.343 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:42.601 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3493234 root 20 0 128.2g 47232 34560 S 0.0 0.1 0:00.00 reactor_1' 00:30:42.601 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3493234 root 20 0 128.2g 
47232 34560 S 0.0 0.1 0:00.00 reactor_1 00:30:42.601 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:42.601 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:42.601 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:42.601 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:42.601 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:42.601 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:42.601 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:42.601 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:42.601 09:21:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:30:42.601 09:21:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3493388 00:30:42.601 09:21:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:42.601 09:21:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:30:42.601 09:21:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:30:42.601 09:21:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3493229 0 00:30:42.601 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3493229 0 busy 00:30:42.601 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3493229 00:30:42.601 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:42.601 09:21:19 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:30:42.601 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:30:42.601 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:42.601 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:30:42.601 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:42.601 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:42.601 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:42.601 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3493229 -w 256 00:30:42.601 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:42.859 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3493229 root 20 0 128.2g 48000 34944 S 0.0 0.1 0:00.27 reactor_0' 00:30:42.859 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3493229 root 20 0 128.2g 48000 34944 S 0.0 0.1 0:00.27 reactor_0 00:30:42.859 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:42.859 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:42.859 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:42.859 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:42.859 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:30:42.859 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:30:42.859 09:21:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:30:43.792 09:21:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:30:43.792 09:21:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:43.792 09:21:20 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@26 -- # top -bHn 1 -p 3493229 -w 256 00:30:43.792 09:21:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:44.050 09:21:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3493229 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:02.46 reactor_0' 00:30:44.050 09:21:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3493229 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:02.46 reactor_0 00:30:44.050 09:21:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:44.050 09:21:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:44.050 09:21:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:30:44.050 09:21:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:30:44.050 09:21:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:30:44.050 09:21:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:30:44.050 09:21:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:30:44.050 09:21:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:44.050 09:21:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:30:44.050 09:21:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:30:44.050 09:21:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3493229 1 00:30:44.050 09:21:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3493229 1 busy 00:30:44.050 09:21:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3493229 00:30:44.050 09:21:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:44.050 09:21:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:30:44.050 09:21:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local 
busy_threshold=30 00:30:44.050 09:21:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:44.050 09:21:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:30:44.050 09:21:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:44.050 09:21:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:44.050 09:21:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:44.050 09:21:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3493229 -w 256 00:30:44.050 09:21:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:44.050 09:21:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3493234 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:01.25 reactor_1' 00:30:44.050 09:21:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3493234 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:01.25 reactor_1 00:30:44.050 09:21:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:44.050 09:21:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:44.050 09:21:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:30:44.050 09:21:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:30:44.050 09:21:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:30:44.050 09:21:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:30:44.050 09:21:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:30:44.050 09:21:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:44.050 09:21:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3493388 00:30:54.016 Initializing NVMe Controllers 00:30:54.016 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:54.016 
Controller IO queue size 256, less than required. 00:30:54.016 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:54.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:54.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:54.016 Initialization complete. Launching workers. 00:30:54.016 ======================================================== 00:30:54.016 Latency(us) 00:30:54.016 Device Information : IOPS MiB/s Average min max 00:30:54.016 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 13709.70 53.55 18685.77 4449.23 59302.18 00:30:54.016 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13696.60 53.50 18702.70 4256.03 22742.77 00:30:54.016 ======================================================== 00:30:54.016 Total : 27406.30 107.06 18694.23 4256.03 59302.18 00:30:54.016 00:30:54.016 09:21:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:30:54.016 09:21:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3493229 0 00:30:54.016 09:21:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3493229 0 idle 00:30:54.016 09:21:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3493229 00:30:54.016 09:21:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:54.016 09:21:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:54.016 09:21:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:54.016 09:21:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:54.016 09:21:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:54.016 09:21:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:54.016 09:21:29 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:54.016 09:21:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:54.016 09:21:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:54.016 09:21:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3493229 -w 256 00:30:54.016 09:21:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:54.016 09:21:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3493229 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:20.21 reactor_0' 00:30:54.016 09:21:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3493229 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:20.21 reactor_0 00:30:54.016 09:21:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:54.016 09:21:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:54.016 09:21:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:54.016 09:21:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:54.016 09:21:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:54.016 09:21:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:54.016 09:21:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:54.016 09:21:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:54.016 09:21:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:30:54.016 09:21:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3493229 1 00:30:54.017 09:21:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3493229 1 idle 00:30:54.017 09:21:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3493229 00:30:54.017 09:21:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=1 00:30:54.017 09:21:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:54.017 09:21:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:54.017 09:21:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:54.017 09:21:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:54.017 09:21:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:54.017 09:21:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:54.017 09:21:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:54.017 09:21:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:54.017 09:21:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3493229 -w 256 00:30:54.017 09:21:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:54.017 09:21:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3493234 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:09.97 reactor_1' 00:30:54.017 09:21:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3493234 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:09.97 reactor_1 00:30:54.017 09:21:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:54.017 09:21:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:54.017 09:21:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:54.017 09:21:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:54.017 09:21:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:54.017 09:21:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:54.017 09:21:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:54.017 09:21:30 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@35 -- # return 0 00:30:54.017 09:21:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:54.017 09:21:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:30:54.017 09:21:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:30:54.017 09:21:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:30:54.017 09:21:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:30:54.017 09:21:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:30:55.394 09:21:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:30:55.394 09:21:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:30:55.394 09:21:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:30:55.394 09:21:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:30:55.394 09:21:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:30:55.394 09:21:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:30:55.394 09:21:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:30:55.394 09:21:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3493229 0 00:30:55.394 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3493229 0 idle 00:30:55.394 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3493229 00:30:55.394 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:55.394 09:21:32 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:55.394 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:55.394 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:55.394 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:55.394 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:55.394 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:55.394 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:55.394 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:55.394 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3493229 -w 256 00:30:55.394 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:55.658 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3493229 root 20 0 128.2g 61056 34944 S 0.0 0.1 0:20.31 reactor_0' 00:30:55.658 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3493229 root 20 0 128.2g 61056 34944 S 0.0 0.1 0:20.31 reactor_0 00:30:55.658 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:55.658 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:55.658 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:55.658 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:55.658 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:55.658 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:55.658 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:55.658 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 
0 00:30:55.659 09:21:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:30:55.659 09:21:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3493229 1 00:30:55.659 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3493229 1 idle 00:30:55.659 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3493229 00:30:55.659 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:55.659 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:55.659 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:55.659 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:55.659 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:55.659 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:55.659 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:55.659 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:55.659 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:55.659 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3493229 -w 256 00:30:55.659 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3493234 root 20 0 128.2g 61056 34944 S 0.0 0.1 0:10.02 reactor_1' 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3493234 root 20 0 128.2g 61056 34944 S 0.0 0.1 0:10.02 reactor_1 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:55.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- 
# set +e 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:55.920 rmmod nvme_tcp 00:30:55.920 rmmod nvme_fabrics 00:30:55.920 rmmod nvme_keyring 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 3493229 ']' 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3493229 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 3493229 ']' 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 3493229 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:55.920 09:21:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3493229 00:30:56.177 09:21:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:56.177 09:21:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:56.177 09:21:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3493229' 00:30:56.177 killing process with pid 3493229 00:30:56.177 09:21:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 3493229 00:30:56.177 09:21:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 3493229 00:30:56.177 09:21:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:56.177 09:21:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ 
tcp == \t\c\p ]] 00:30:56.177 09:21:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:56.177 09:21:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:30:56.177 09:21:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:30:56.177 09:21:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:56.177 09:21:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:30:56.177 09:21:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:56.177 09:21:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:56.177 09:21:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:56.177 09:21:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:56.177 09:21:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:58.713 09:21:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:58.713 00:30:58.713 real 0m18.859s 00:30:58.713 user 0m37.604s 00:30:58.713 sys 0m6.388s 00:30:58.713 09:21:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:58.713 09:21:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:58.713 ************************************ 00:30:58.713 END TEST nvmf_interrupt 00:30:58.713 ************************************ 00:30:58.713 00:30:58.713 real 25m8.270s 00:30:58.713 user 59m8.560s 00:30:58.713 sys 6m42.037s 00:30:58.713 09:21:35 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:58.713 09:21:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:58.713 ************************************ 00:30:58.713 END TEST nvmf_tcp 00:30:58.713 ************************************ 00:30:58.713 09:21:35 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:30:58.713 09:21:35 -- 
spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:58.713 09:21:35 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:58.713 09:21:35 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:58.713 09:21:35 -- common/autotest_common.sh@10 -- # set +x 00:30:58.713 ************************************ 00:30:58.713 START TEST spdkcli_nvmf_tcp 00:30:58.713 ************************************ 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:58.713 * Looking for test storage... 00:30:58.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 
00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:58.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.713 --rc genhtml_branch_coverage=1 00:30:58.713 --rc genhtml_function_coverage=1 00:30:58.713 --rc genhtml_legend=1 00:30:58.713 --rc geninfo_all_blocks=1 
00:30:58.713 --rc geninfo_unexecuted_blocks=1 00:30:58.713 00:30:58.713 ' 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:58.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.713 --rc genhtml_branch_coverage=1 00:30:58.713 --rc genhtml_function_coverage=1 00:30:58.713 --rc genhtml_legend=1 00:30:58.713 --rc geninfo_all_blocks=1 00:30:58.713 --rc geninfo_unexecuted_blocks=1 00:30:58.713 00:30:58.713 ' 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:58.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.713 --rc genhtml_branch_coverage=1 00:30:58.713 --rc genhtml_function_coverage=1 00:30:58.713 --rc genhtml_legend=1 00:30:58.713 --rc geninfo_all_blocks=1 00:30:58.713 --rc geninfo_unexecuted_blocks=1 00:30:58.713 00:30:58.713 ' 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:58.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.713 --rc genhtml_branch_coverage=1 00:30:58.713 --rc genhtml_function_coverage=1 00:30:58.713 --rc genhtml_legend=1 00:30:58.713 --rc geninfo_all_blocks=1 00:30:58.713 --rc geninfo_unexecuted_blocks=1 00:30:58.713 00:30:58.713 ' 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:30:58.713 09:21:35 
spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:58.713 09:21:35 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:58.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3495400 00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3495400 00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 
3495400 ']' 00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:58.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:58.714 09:21:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:58.714 [2024-11-20 09:21:35.517067] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:30:58.714 [2024-11-20 09:21:35.517153] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3495400 ] 00:30:58.714 [2024-11-20 09:21:35.581810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:58.714 [2024-11-20 09:21:35.639823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:58.714 [2024-11-20 09:21:35.639828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:58.971 09:21:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:58.971 09:21:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:30:58.971 09:21:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:58.971 09:21:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:58.971 09:21:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:58.971 09:21:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:58.971 09:21:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ 
tcp == \r\d\m\a ]] 00:30:58.971 09:21:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:58.971 09:21:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:58.971 09:21:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:58.971 09:21:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:58.971 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:58.971 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:58.971 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:58.971 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:58.971 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:58.971 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:58.971 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:58.972 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:58.972 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:58.972 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:58.972 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:58.972 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:58.972 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:58.972 '\''/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:58.972 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:58.972 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:58.972 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:58.972 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:58.972 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:58.972 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:58.972 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:58.972 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:58.972 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:58.972 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:58.972 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:58.972 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:58.972 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:58.972 ' 00:31:01.496 [2024-11-20 09:21:38.402028] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:02.867 [2024-11-20 09:21:39.674447] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4260 *** 00:31:05.392 [2024-11-20 09:21:42.057769] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:31:07.290 [2024-11-20 09:21:44.067989] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:31:08.664 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:31:08.664 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:31:08.664 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:31:08.664 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:31:08.664 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:31:08.664 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:31:08.664 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:31:08.664 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:08.664 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:31:08.664 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:31:08.664 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:08.664 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:08.664 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:31:08.664 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:08.664 Executing command: 
['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:08.664 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:31:08.664 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:08.664 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:08.664 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:08.664 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:08.664 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:31:08.664 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:31:08.664 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:08.664 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:31:08.664 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:08.664 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:31:08.664 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:31:08.664 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:31:08.921 09:21:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # 
timing_exit spdkcli_create_nvmf_config 00:31:08.921 09:21:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:08.921 09:21:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:08.921 09:21:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:31:08.921 09:21:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:08.921 09:21:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:08.921 09:21:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:31:08.921 09:21:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:31:09.178 09:21:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:31:09.435 09:21:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:31:09.435 09:21:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:31:09.435 09:21:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:09.435 09:21:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:09.435 09:21:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:31:09.435 09:21:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:09.435 09:21:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:09.435 09:21:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:31:09.435 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' 
'\''Malloc4'\'' 00:31:09.435 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:09.435 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:31:09.435 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:31:09.435 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:31:09.435 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:31:09.435 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:09.435 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:31:09.435 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:31:09.435 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:31:09.435 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:31:09.435 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:31:09.435 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:31:09.435 ' 00:31:14.694 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:31:14.695 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:14.695 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:14.695 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:14.695 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:31:14.695 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:31:14.695 Executing command: ['/nvmf/subsystem 
delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:14.695 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:14.695 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:31:14.695 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:31:14.695 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:31:14.695 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:14.695 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:14.695 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:14.695 09:21:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:31:14.695 09:21:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:14.695 09:21:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:14.695 09:21:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3495400 00:31:14.695 09:21:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 3495400 ']' 00:31:14.695 09:21:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 3495400 00:31:14.695 09:21:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:31:14.695 09:21:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:14.695 09:21:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3495400 00:31:14.952 09:21:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:14.952 09:21:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:14.952 09:21:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3495400' 00:31:14.952 killing process with pid 3495400 00:31:14.952 09:21:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 3495400 00:31:14.952 09:21:51 
spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 3495400 00:31:14.952 09:21:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:31:14.952 09:21:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:31:14.952 09:21:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3495400 ']' 00:31:14.952 09:21:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3495400 00:31:14.952 09:21:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 3495400 ']' 00:31:14.952 09:21:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 3495400 00:31:14.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3495400) - No such process 00:31:14.952 09:21:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@979 -- # echo 'Process with pid 3495400 is not found' 00:31:14.952 Process with pid 3495400 is not found 00:31:14.952 09:21:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:14.952 09:21:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:14.952 09:21:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:14.952 00:31:14.952 real 0m16.645s 00:31:14.952 user 0m35.504s 00:31:14.952 sys 0m0.763s 00:31:14.952 09:21:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:14.952 09:21:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:14.952 ************************************ 00:31:14.952 END TEST spdkcli_nvmf_tcp 00:31:14.952 ************************************ 00:31:15.210 09:21:51 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:15.211 09:21:51 -- common/autotest_common.sh@1103 -- # '[' 3 
-le 1 ']' 00:31:15.211 09:21:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:15.211 09:21:51 -- common/autotest_common.sh@10 -- # set +x 00:31:15.211 ************************************ 00:31:15.211 START TEST nvmf_identify_passthru 00:31:15.211 ************************************ 00:31:15.211 09:21:52 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:15.211 * Looking for test storage... 00:31:15.211 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:15.211 09:21:52 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:15.211 09:21:52 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:31:15.211 09:21:52 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:15.211 09:21:52 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 
gt=0 eq=0 v 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:31:15.211 09:21:52 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:15.211 09:21:52 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:15.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.211 --rc genhtml_branch_coverage=1 00:31:15.211 --rc genhtml_function_coverage=1 00:31:15.211 --rc genhtml_legend=1 
00:31:15.211 --rc geninfo_all_blocks=1 00:31:15.211 --rc geninfo_unexecuted_blocks=1 00:31:15.211 00:31:15.211 ' 00:31:15.211 09:21:52 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:15.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.211 --rc genhtml_branch_coverage=1 00:31:15.211 --rc genhtml_function_coverage=1 00:31:15.211 --rc genhtml_legend=1 00:31:15.211 --rc geninfo_all_blocks=1 00:31:15.211 --rc geninfo_unexecuted_blocks=1 00:31:15.211 00:31:15.211 ' 00:31:15.211 09:21:52 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:15.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.211 --rc genhtml_branch_coverage=1 00:31:15.211 --rc genhtml_function_coverage=1 00:31:15.211 --rc genhtml_legend=1 00:31:15.211 --rc geninfo_all_blocks=1 00:31:15.211 --rc geninfo_unexecuted_blocks=1 00:31:15.211 00:31:15.211 ' 00:31:15.211 09:21:52 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:15.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.211 --rc genhtml_branch_coverage=1 00:31:15.211 --rc genhtml_function_coverage=1 00:31:15.211 --rc genhtml_legend=1 00:31:15.211 --rc geninfo_all_blocks=1 00:31:15.211 --rc geninfo_unexecuted_blocks=1 00:31:15.211 00:31:15.211 ' 00:31:15.211 09:21:52 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:15.211 09:21:52 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:31:15.211 09:21:52 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:15.211 09:21:52 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:15.211 09:21:52 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:15.211 09:21:52 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:15.211 09:21:52 nvmf_identify_passthru -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:15.211 09:21:52 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:15.211 09:21:52 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:15.211 09:21:52 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:15.211 09:21:52 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:15.211 09:21:52 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:15.211 09:21:52 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:15.211 09:21:52 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:15.211 09:21:52 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:15.211 09:21:52 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:15.211 09:21:52 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:15.211 09:21:52 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:15.211 09:21:52 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:15.211 09:21:52 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.211 09:21:52 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.211 09:21:52 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.211 09:21:52 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:15.211 09:21:52 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.211 09:21:52 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:31:15.211 09:21:52 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:15.211 09:21:52 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:15.211 09:21:52 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:15.211 09:21:52 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:15.211 09:21:52 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:15.211 09:21:52 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:15.211 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:15.211 09:21:52 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:15.211 09:21:52 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:15.211 09:21:52 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:15.211 09:21:52 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:15.211 09:21:52 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:15.212 09:21:52 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.212 09:21:52 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.212 09:21:52 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.212 09:21:52 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:15.212 09:21:52 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.212 09:21:52 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:31:15.212 09:21:52 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:15.212 09:21:52 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:15.212 09:21:52 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:15.212 09:21:52 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:15.212 09:21:52 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:15.212 09:21:52 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:15.212 09:21:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:15.212 09:21:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:15.212 09:21:52 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:15.212 09:21:52 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:15.212 09:21:52 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:31:15.212 09:21:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:17.739 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:17.739 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:31:17.739 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:31:17.739 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:17.739 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:17.739 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:17.739 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:17.739 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:31:17.739 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:17.739 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:31:17.739 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:31:17.739 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:31:17.739 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:31:17.739 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:31:17.739 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:31:17.739 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:17.739 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:17.739 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:17.739 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:17.739 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:17.739 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:17.739 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:17.739 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:17.739 
09:21:54 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:17.739 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:17.739 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:17.739 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:17.739 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:17.739 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:17.739 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:17.740 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:17.740 Found 0000:09:00.1 
(0x8086 - 0x159b) 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:17.740 Found net devices under 0000:09:00.0: cvl_0_0 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:17.740 09:21:54 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:17.740 Found net devices under 0000:09:00.1: cvl_0_1 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:17.740 
09:21:54 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:17.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:17.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:31:17.740 00:31:17.740 --- 10.0.0.2 ping statistics --- 00:31:17.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:17.740 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:17.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:17.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:31:17.740 00:31:17.740 --- 10.0.0.1 ping statistics --- 00:31:17.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:17.740 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:17.740 09:21:54 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:17.740 09:21:54 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:31:17.740 09:21:54 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:17.740 09:21:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:17.740 09:21:54 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:31:17.740 
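The trace above moves one port of the NIC into a private network namespace so that target (10.0.0.2, inside `cvl_0_0_ns_spdk`) and initiator (10.0.0.1, on the host) talk NVMe/TCP over real hardware. A minimal sketch of the same sequence follows; the commands require root, so this sketch only echoes them (swap the `echo` for direct execution to apply). Interface names, the namespace name, and the addresses are taken from the log.

```shell
#!/usr/bin/env bash
# Condensed sketch of the namespace setup traced above. Each command is
# echoed rather than executed, since the real thing needs root.
run() { echo "+ $*"; }

run ip netns add cvl_0_0_ns_spdk                          # target namespace
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move target port in
run ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, host
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run ping -c 1 10.0.0.2                                    # reachability check
```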
09:21:54 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:31:17.740 09:21:54 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:31:17.740 09:21:54 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:31:17.740 09:21:54 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:31:17.740 09:21:54 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:31:17.740 09:21:54 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:31:17.740 09:21:54 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:17.740 09:21:54 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:17.740 09:21:54 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:31:17.740 09:21:54 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:31:17.740 09:21:54 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:0b:00.0 00:31:17.740 09:21:54 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:0b:00.0 00:31:17.740 09:21:54 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:0b:00.0 00:31:17.740 09:21:54 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:0b:00.0 ']' 00:31:17.740 09:21:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:31:17.740 09:21:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:31:17.740 09:21:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:31:21.925 09:21:58 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F4Q1P0FGN 00:31:21.925 09:21:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:31:21.925 09:21:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:31:21.925 09:21:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:31:26.236 09:22:02 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:31:26.236 09:22:02 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:31:26.236 09:22:02 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:26.236 09:22:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:26.236 09:22:02 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:31:26.236 09:22:02 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:26.236 09:22:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:26.236 09:22:02 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3499939 00:31:26.236 09:22:02 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:31:26.236 09:22:02 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:26.236 09:22:02 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3499939 00:31:26.236 09:22:02 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 3499939 ']' 00:31:26.236 09:22:02 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:31:26.236 09:22:02 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:26.236 09:22:02 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:26.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:26.236 09:22:02 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:26.236 09:22:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:26.236 [2024-11-20 09:22:02.841381] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:31:26.236 [2024-11-20 09:22:02.841472] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:26.236 [2024-11-20 09:22:02.917692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:26.236 [2024-11-20 09:22:02.976102] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:26.236 [2024-11-20 09:22:02.976152] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:26.236 [2024-11-20 09:22:02.976181] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:26.236 [2024-11-20 09:22:02.976192] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:26.236 [2024-11-20 09:22:02.976202] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:26.236 [2024-11-20 09:22:02.977752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:26.236 [2024-11-20 09:22:02.977777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:26.236 [2024-11-20 09:22:02.977834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:26.236 [2024-11-20 09:22:02.977838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:26.236 09:22:03 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:26.236 09:22:03 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:31:26.236 09:22:03 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:31:26.236 09:22:03 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.236 09:22:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:26.236 INFO: Log level set to 20 00:31:26.236 INFO: Requests: 00:31:26.236 { 00:31:26.236 "jsonrpc": "2.0", 00:31:26.236 "method": "nvmf_set_config", 00:31:26.236 "id": 1, 00:31:26.236 "params": { 00:31:26.236 "admin_cmd_passthru": { 00:31:26.236 "identify_ctrlr": true 00:31:26.236 } 00:31:26.236 } 00:31:26.236 } 00:31:26.236 00:31:26.236 INFO: response: 00:31:26.236 { 00:31:26.236 "jsonrpc": "2.0", 00:31:26.236 "id": 1, 00:31:26.236 "result": true 00:31:26.236 } 00:31:26.236 00:31:26.236 09:22:03 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.236 09:22:03 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:31:26.236 09:22:03 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.236 09:22:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:26.236 INFO: Setting log level to 20 00:31:26.236 INFO: Setting log level to 20 00:31:26.236 INFO: Log level set to 20 00:31:26.236 INFO: Log level set to 20 00:31:26.236 
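`rpc_cmd` drives the target over JSON-RPC on `/var/tmp/spdk.sock`; the request/response pair logged above is what `nvmf_set_config --passthru-identify-ctrlr` produces on the wire. A minimal sketch that rebuilds the same payload and sanity-checks its shape locally (it does not contact a live target; actually sending it would go through `scripts/rpc.py` or a raw connection to the socket):

```shell
#!/usr/bin/env bash
# Rebuild the nvmf_set_config JSON-RPC payload shown in the log and
# validate it with python3's json module; no SPDK target is contacted.
payload='{"jsonrpc": "2.0", "method": "nvmf_set_config", "id": 1,
          "params": {"admin_cmd_passthru": {"identify_ctrlr": true}}}'

method=$(printf '%s' "$payload" | python3 -c 'import json,sys; print(json.load(sys.stdin)["method"])')
echo "method: $method"
```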
INFO: Requests: 00:31:26.236 { 00:31:26.236 "jsonrpc": "2.0", 00:31:26.236 "method": "framework_start_init", 00:31:26.236 "id": 1 00:31:26.236 } 00:31:26.236 00:31:26.236 INFO: Requests: 00:31:26.236 { 00:31:26.236 "jsonrpc": "2.0", 00:31:26.236 "method": "framework_start_init", 00:31:26.236 "id": 1 00:31:26.236 } 00:31:26.236 00:31:26.236 [2024-11-20 09:22:03.163504] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:31:26.236 INFO: response: 00:31:26.236 { 00:31:26.236 "jsonrpc": "2.0", 00:31:26.236 "id": 1, 00:31:26.236 "result": true 00:31:26.236 } 00:31:26.236 00:31:26.236 INFO: response: 00:31:26.236 { 00:31:26.236 "jsonrpc": "2.0", 00:31:26.236 "id": 1, 00:31:26.236 "result": true 00:31:26.236 } 00:31:26.236 00:31:26.236 09:22:03 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.236 09:22:03 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:26.236 09:22:03 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.236 09:22:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:26.236 INFO: Setting log level to 40 00:31:26.236 INFO: Setting log level to 40 00:31:26.236 INFO: Setting log level to 40 00:31:26.237 [2024-11-20 09:22:03.173595] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:26.237 09:22:03 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.237 09:22:03 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:31:26.237 09:22:03 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:26.237 09:22:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:26.237 09:22:03 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 00:31:26.237 09:22:03 
nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.237 09:22:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:29.518 Nvme0n1 00:31:29.518 09:22:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.518 09:22:06 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:31:29.518 09:22:06 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.518 09:22:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:29.518 09:22:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.518 09:22:06 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:29.518 09:22:06 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.518 09:22:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:29.518 09:22:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.518 09:22:06 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:29.518 09:22:06 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.518 09:22:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:29.518 [2024-11-20 09:22:06.076950] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:29.518 09:22:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.518 09:22:06 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:31:29.518 09:22:06 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.518 09:22:06 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:29.518 [ 00:31:29.518 { 00:31:29.518 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:29.518 "subtype": "Discovery", 00:31:29.518 "listen_addresses": [], 00:31:29.518 "allow_any_host": true, 00:31:29.518 "hosts": [] 00:31:29.518 }, 00:31:29.518 { 00:31:29.518 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:29.518 "subtype": "NVMe", 00:31:29.518 "listen_addresses": [ 00:31:29.518 { 00:31:29.518 "trtype": "TCP", 00:31:29.518 "adrfam": "IPv4", 00:31:29.518 "traddr": "10.0.0.2", 00:31:29.518 "trsvcid": "4420" 00:31:29.518 } 00:31:29.518 ], 00:31:29.518 "allow_any_host": true, 00:31:29.518 "hosts": [], 00:31:29.518 "serial_number": "SPDK00000000000001", 00:31:29.518 "model_number": "SPDK bdev Controller", 00:31:29.518 "max_namespaces": 1, 00:31:29.518 "min_cntlid": 1, 00:31:29.518 "max_cntlid": 65519, 00:31:29.518 "namespaces": [ 00:31:29.518 { 00:31:29.518 "nsid": 1, 00:31:29.518 "bdev_name": "Nvme0n1", 00:31:29.518 "name": "Nvme0n1", 00:31:29.518 "nguid": "5D4A99946F9B4B049382136F51E2086A", 00:31:29.518 "uuid": "5d4a9994-6f9b-4b04-9382-136f51e2086a" 00:31:29.518 } 00:31:29.518 ] 00:31:29.518 } 00:31:29.518 ] 00:31:29.518 09:22:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.518 09:22:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:29.518 09:22:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:31:29.518 09:22:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:31:29.518 09:22:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F4Q1P0FGN 00:31:29.518 09:22:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:29.518 09:22:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:31:29.518 09:22:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:31:29.776 09:22:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:31:29.776 09:22:06 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F4Q1P0FGN '!=' BTLJ72430F4Q1P0FGN ']' 00:31:29.776 09:22:06 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:31:29.776 09:22:06 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:29.776 09:22:06 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.776 09:22:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:29.776 09:22:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.776 09:22:06 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:31:29.776 09:22:06 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:31:29.776 09:22:06 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:29.776 09:22:06 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:31:29.776 09:22:06 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:29.776 09:22:06 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:31:29.776 09:22:06 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:29.776 09:22:06 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:29.776 rmmod nvme_tcp 00:31:29.776 rmmod nvme_fabrics 00:31:29.776 rmmod nvme_keyring 00:31:29.776 09:22:06 
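The pass condition of this test is the pair of string comparisons above: the serial and model number that `spdk_nvme_identify` reports over NVMe/TCP must equal the ones read earlier over PCIe, which shows the passthru path surfaces the physical controller's identity. A sketch of that check, with the values hard-coded from this log:

```shell
#!/usr/bin/env bash
# Compare identify fields gathered over PCIe against those gathered over
# NVMe/TCP; any mismatch fails the test. Values hard-coded from the log.
nvme_serial_number=BTLJ72430F4Q1P0FGN   # from PCIe identify
nvmf_serial_number=BTLJ72430F4Q1P0FGN   # from NVMe/TCP identify
nvme_model_number=INTEL
nvmf_model_number=INTEL

status=pass
[ "$nvme_serial_number" != "$nvmf_serial_number" ] && status=fail
[ "$nvme_model_number" != "$nvmf_model_number" ] && status=fail
echo "$status"
```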
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:29.776 09:22:06 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:31:29.776 09:22:06 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:31:29.776 09:22:06 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 3499939 ']' 00:31:29.776 09:22:06 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3499939 00:31:29.776 09:22:06 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 3499939 ']' 00:31:29.776 09:22:06 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 3499939 00:31:29.776 09:22:06 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 00:31:29.776 09:22:06 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:29.777 09:22:06 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3499939 00:31:29.777 09:22:06 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:29.777 09:22:06 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:29.777 09:22:06 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3499939' 00:31:29.777 killing process with pid 3499939 00:31:29.777 09:22:06 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 3499939 00:31:29.777 09:22:06 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 3499939 00:31:31.674 09:22:08 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:31.674 09:22:08 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:31.674 09:22:08 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:31.674 09:22:08 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:31:31.674 09:22:08 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:31:31.674 09:22:08 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:31.674 09:22:08 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:31:31.674 09:22:08 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:31.674 09:22:08 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:31.674 09:22:08 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:31.674 09:22:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:31.674 09:22:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:33.580 09:22:10 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:33.580 00:31:33.580 real 0m18.302s 00:31:33.580 user 0m26.790s 00:31:33.580 sys 0m3.249s 00:31:33.580 09:22:10 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:33.580 09:22:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:33.580 ************************************ 00:31:33.580 END TEST nvmf_identify_passthru 00:31:33.580 ************************************ 00:31:33.580 09:22:10 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:33.580 09:22:10 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:31:33.580 09:22:10 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:33.580 09:22:10 -- common/autotest_common.sh@10 -- # set +x 00:31:33.580 ************************************ 00:31:33.580 START TEST nvmf_dif 00:31:33.580 ************************************ 00:31:33.580 09:22:10 nvmf_dif -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:33.580 * Looking for test storage... 
00:31:33.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:33.580 09:22:10 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:33.580 09:22:10 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:31:33.580 09:22:10 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:33.580 09:22:10 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:33.580 09:22:10 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:33.580 09:22:10 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:33.580 09:22:10 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:33.580 09:22:10 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:31:33.580 09:22:10 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:31:33.580 09:22:10 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:31:33.580 09:22:10 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:31:33.580 09:22:10 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:31:33.580 09:22:10 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:31:33.580 09:22:10 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:31:33.580 09:22:10 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:33.580 09:22:10 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:31:33.580 09:22:10 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:31:33.580 09:22:10 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:33.580 09:22:10 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:33.580 09:22:10 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:31:33.580 09:22:10 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:31:33.580 09:22:10 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:33.580 09:22:10 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:31:33.580 09:22:10 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:31:33.580 09:22:10 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:31:33.580 09:22:10 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:31:33.580 09:22:10 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:33.580 09:22:10 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:31:33.580 09:22:10 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:31:33.580 09:22:10 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:33.580 09:22:10 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:33.580 09:22:10 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:31:33.580 09:22:10 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:33.580 09:22:10 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:33.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.580 --rc genhtml_branch_coverage=1 00:31:33.580 --rc genhtml_function_coverage=1 00:31:33.580 --rc genhtml_legend=1 00:31:33.580 --rc geninfo_all_blocks=1 00:31:33.580 --rc geninfo_unexecuted_blocks=1 00:31:33.580 00:31:33.580 ' 00:31:33.580 09:22:10 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:33.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.580 --rc genhtml_branch_coverage=1 00:31:33.580 --rc genhtml_function_coverage=1 00:31:33.580 --rc genhtml_legend=1 00:31:33.580 --rc geninfo_all_blocks=1 00:31:33.580 --rc geninfo_unexecuted_blocks=1 00:31:33.580 00:31:33.580 ' 00:31:33.580 09:22:10 nvmf_dif -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:31:33.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.580 --rc genhtml_branch_coverage=1 00:31:33.580 --rc genhtml_function_coverage=1 00:31:33.580 --rc genhtml_legend=1 00:31:33.580 --rc geninfo_all_blocks=1 00:31:33.580 --rc geninfo_unexecuted_blocks=1 00:31:33.580 00:31:33.580 ' 00:31:33.580 09:22:10 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:33.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.580 --rc genhtml_branch_coverage=1 00:31:33.580 --rc genhtml_function_coverage=1 00:31:33.580 --rc genhtml_legend=1 00:31:33.580 --rc geninfo_all_blocks=1 00:31:33.580 --rc geninfo_unexecuted_blocks=1 00:31:33.580 00:31:33.580 ' 00:31:33.580 09:22:10 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:33.580 09:22:10 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:31:33.580 09:22:10 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:33.580 09:22:10 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:33.580 09:22:10 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:33.580 09:22:10 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:33.580 09:22:10 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:33.580 09:22:10 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:33.580 09:22:10 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:33.580 09:22:10 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:33.580 09:22:10 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:33.581 09:22:10 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:33.581 09:22:10 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:33.581 09:22:10 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:33.581 09:22:10 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:33.581 09:22:10 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:33.581 09:22:10 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:33.581 09:22:10 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:33.581 09:22:10 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:33.581 09:22:10 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:31:33.581 09:22:10 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:33.581 09:22:10 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:33.581 09:22:10 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:33.581 09:22:10 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.581 09:22:10 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.581 09:22:10 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.581 09:22:10 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:31:33.581 09:22:10 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.581 09:22:10 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:31:33.581 09:22:10 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:33.581 09:22:10 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:33.581 09:22:10 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:33.581 09:22:10 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:33.581 09:22:10 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:33.581 09:22:10 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:33.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:33.581 09:22:10 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:33.581 09:22:10 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:33.581 09:22:10 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:33.581 09:22:10 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:31:33.581 09:22:10 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:31:33.581 09:22:10 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:31:33.581 09:22:10 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:31:33.581 09:22:10 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:31:33.581 09:22:10 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:33.581 09:22:10 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:33.581 09:22:10 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:33.581 09:22:10 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:33.581 09:22:10 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:33.581 09:22:10 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:33.581 09:22:10 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:33.581 09:22:10 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:33.581 09:22:10 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:33.581 09:22:10 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:33.581 09:22:10 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:31:33.581 09:22:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:36.110 09:22:12 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:36.110 09:22:12 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:31:36.110 09:22:12 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:36.110 09:22:12 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:36.110 09:22:12 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:36.110 09:22:12 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:36.110 09:22:12 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:36.110 09:22:12 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:31:36.110 09:22:12 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:36.110 09:22:12 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:31:36.110 09:22:12 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:31:36.110 09:22:12 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:31:36.110 09:22:12 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:31:36.110 09:22:12 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:31:36.110 09:22:12 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:31:36.110 09:22:12 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:36.110 09:22:12 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:36.110 09:22:12 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:36.111 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:36.111 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:36.111 09:22:12 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:36.111 Found net devices under 0000:09:00.0: cvl_0_0 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:36.111 Found net devices under 0000:09:00.1: cvl_0_1 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:36.111 
09:22:12 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:36.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:36.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:31:36.111 00:31:36.111 --- 10.0.0.2 ping statistics --- 00:31:36.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:36.111 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:36.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:36.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:31:36.111 00:31:36.111 --- 10.0.0.1 ping statistics --- 00:31:36.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:36.111 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:31:36.111 09:22:12 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:37.047 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:37.047 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:31:37.047 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:37.047 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:31:37.047 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:37.047 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:37.047 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:37.047 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:37.047 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:37.047 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:31:37.047 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:31:37.047 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:37.047 0000:80:04.4 (8086 0e24): Already 
using the vfio-pci driver 00:31:37.047 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:37.047 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:37.047 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:37.048 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:37.305 09:22:14 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:37.305 09:22:14 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:37.305 09:22:14 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:37.305 09:22:14 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:37.305 09:22:14 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:37.305 09:22:14 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:37.305 09:22:14 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:37.305 09:22:14 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:31:37.305 09:22:14 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:37.305 09:22:14 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:37.305 09:22:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:37.305 09:22:14 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3503194 00:31:37.305 09:22:14 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:37.305 09:22:14 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3503194 00:31:37.305 09:22:14 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 3503194 ']' 00:31:37.305 09:22:14 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:37.305 09:22:14 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:37.305 09:22:14 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:37.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:37.305 09:22:14 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:37.305 09:22:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:37.305 [2024-11-20 09:22:14.173985] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:31:37.305 [2024-11-20 09:22:14.174078] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:37.305 [2024-11-20 09:22:14.247504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.305 [2024-11-20 09:22:14.308948] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:37.305 [2024-11-20 09:22:14.309000] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:37.305 [2024-11-20 09:22:14.309030] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:37.305 [2024-11-20 09:22:14.309042] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:37.305 [2024-11-20 09:22:14.309052] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:37.305 [2024-11-20 09:22:14.309679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:37.563 09:22:14 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:37.563 09:22:14 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:31:37.563 09:22:14 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:37.563 09:22:14 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:37.563 09:22:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:37.563 09:22:14 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:37.563 09:22:14 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:31:37.563 09:22:14 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:37.563 09:22:14 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.563 09:22:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:37.563 [2024-11-20 09:22:14.453969] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:37.563 09:22:14 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.563 09:22:14 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:37.563 09:22:14 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:31:37.563 09:22:14 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:37.563 09:22:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:37.563 ************************************ 00:31:37.563 START TEST fio_dif_1_default 00:31:37.563 ************************************ 00:31:37.563 09:22:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:31:37.563 09:22:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:31:37.563 09:22:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:31:37.563 09:22:14 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:31:37.563 09:22:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:31:37.563 09:22:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:31:37.563 09:22:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:37.563 09:22:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.563 09:22:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:37.563 bdev_null0 00:31:37.563 09:22:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.563 09:22:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:37.563 09:22:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.563 09:22:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:37.563 09:22:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.563 09:22:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:37.563 09:22:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.563 09:22:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:37.563 09:22:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.563 09:22:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:37.563 09:22:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.563 09:22:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:37.563 [2024-11-20 09:22:14.514231] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:37.563 09:22:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.563 09:22:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:37.563 09:22:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:37.564 09:22:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:37.564 09:22:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:31:37.564 09:22:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:31:37.564 09:22:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:37.564 09:22:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:37.564 09:22:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:37.564 { 00:31:37.564 "params": { 00:31:37.564 "name": "Nvme$subsystem", 00:31:37.564 "trtype": "$TEST_TRANSPORT", 00:31:37.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:37.564 "adrfam": "ipv4", 00:31:37.564 "trsvcid": "$NVMF_PORT", 00:31:37.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:37.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:37.564 "hdgst": ${hdgst:-false}, 00:31:37.564 "ddgst": ${ddgst:-false} 00:31:37.564 }, 00:31:37.564 "method": "bdev_nvme_attach_controller" 00:31:37.564 } 00:31:37.564 EOF 00:31:37.564 )") 00:31:37.564 09:22:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:31:37.564 09:22:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:37.564 09:22:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:31:37.564 09:22:14 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:31:37.564 09:22:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:31:37.564 09:22:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:37.564 09:22:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:31:37.564 09:22:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:37.564 09:22:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:31:37.564 09:22:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:31:37.564 09:22:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:31:37.564 09:22:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:31:37.564 09:22:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:37.564 09:22:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:31:37.564 09:22:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:31:37.564 09:22:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:31:37.564 09:22:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:31:37.564 09:22:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:31:37.564 09:22:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:31:37.564 09:22:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:37.564 "params": { 00:31:37.564 "name": "Nvme0", 00:31:37.564 "trtype": "tcp", 00:31:37.564 "traddr": "10.0.0.2", 00:31:37.564 "adrfam": "ipv4", 00:31:37.564 "trsvcid": "4420", 00:31:37.564 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:37.564 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:37.564 "hdgst": false, 00:31:37.564 "ddgst": false 00:31:37.564 }, 00:31:37.564 "method": "bdev_nvme_attach_controller" 00:31:37.564 }' 00:31:37.564 09:22:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:31:37.564 09:22:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:31:37.564 09:22:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:31:37.564 09:22:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:37.564 09:22:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:31:37.564 09:22:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:31:37.564 09:22:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:31:37.564 09:22:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:31:37.564 09:22:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:37.564 09:22:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:37.822 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:37.822 fio-3.35 
00:31:37.822 Starting 1 thread 00:31:50.012 00:31:50.012 filename0: (groupid=0, jobs=1): err= 0: pid=3503424: Wed Nov 20 09:22:25 2024 00:31:50.012 read: IOPS=99, BW=396KiB/s (406kB/s)(3968KiB/10016msec) 00:31:50.012 slat (nsec): min=5163, max=76128, avg=9271.30, stdev=4573.61 00:31:50.012 clat (usec): min=586, max=45373, avg=40355.46, stdev=5093.34 00:31:50.012 lat (usec): min=593, max=45402, avg=40364.73, stdev=5093.30 00:31:50.012 clat percentiles (usec): 00:31:50.012 | 1.00th=[ 693], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:50.012 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:50.012 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:50.012 | 99.00th=[41681], 99.50th=[41681], 99.90th=[45351], 99.95th=[45351], 00:31:50.012 | 99.99th=[45351] 00:31:50.012 bw ( KiB/s): min= 384, max= 448, per=99.71%, avg=395.20, stdev=18.79, samples=20 00:31:50.012 iops : min= 96, max= 112, avg=98.80, stdev= 4.70, samples=20 00:31:50.012 lat (usec) : 750=1.51%, 1000=0.10% 00:31:50.012 lat (msec) : 50=98.39% 00:31:50.012 cpu : usr=90.50%, sys=9.23%, ctx=15, majf=0, minf=225 00:31:50.012 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:50.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.012 issued rwts: total=992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.012 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:50.012 00:31:50.012 Run status group 0 (all jobs): 00:31:50.012 READ: bw=396KiB/s (406kB/s), 396KiB/s-396KiB/s (406kB/s-406kB/s), io=3968KiB (4063kB), run=10016-10016msec 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 
00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.012 00:31:50.012 real 0m11.215s 00:31:50.012 user 0m10.199s 00:31:50.012 sys 0m1.169s 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:50.012 ************************************ 00:31:50.012 END TEST fio_dif_1_default 00:31:50.012 ************************************ 00:31:50.012 09:22:25 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:50.012 09:22:25 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:31:50.012 09:22:25 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:50.012 09:22:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:50.012 ************************************ 00:31:50.012 START TEST fio_dif_1_multi_subsystems 00:31:50.012 ************************************ 00:31:50.012 09:22:25 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:50.012 bdev_null0 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.012 09:22:25 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:50.012 [2024-11-20 09:22:25.782833] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:50.012 bdev_null1 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:31:50.012 { 00:31:50.012 "params": { 00:31:50.012 "name": "Nvme$subsystem", 00:31:50.012 "trtype": "$TEST_TRANSPORT", 00:31:50.012 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:50.012 "adrfam": "ipv4", 00:31:50.012 "trsvcid": "$NVMF_PORT", 00:31:50.012 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:50.012 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:50.012 "hdgst": ${hdgst:-false}, 00:31:50.012 "ddgst": ${ddgst:-false} 00:31:50.012 }, 00:31:50.012 "method": "bdev_nvme_attach_controller" 00:31:50.012 } 00:31:50.012 EOF 00:31:50.012 )") 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:31:50.012 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:31:50.013 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:50.013 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:31:50.013 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:31:50.013 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:50.013 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:31:50.013 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:31:50.013 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:31:50.013 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:31:50.013 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:31:50.013 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:50.013 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:50.013 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:31:50.013 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:31:50.013 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:31:50.013 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:50.013 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:50.013 { 00:31:50.013 "params": { 00:31:50.013 "name": "Nvme$subsystem", 00:31:50.013 "trtype": "$TEST_TRANSPORT", 00:31:50.013 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:50.013 "adrfam": "ipv4", 00:31:50.013 "trsvcid": "$NVMF_PORT", 00:31:50.013 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:50.013 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:50.013 "hdgst": ${hdgst:-false}, 00:31:50.013 "ddgst": ${ddgst:-false} 00:31:50.013 }, 00:31:50.013 "method": "bdev_nvme_attach_controller" 00:31:50.013 } 00:31:50.013 EOF 00:31:50.013 )") 00:31:50.013 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:31:50.013 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:31:50.013 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:50.013 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:31:50.013 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:31:50.013 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:50.013 "params": { 00:31:50.013 "name": "Nvme0", 00:31:50.013 "trtype": "tcp", 00:31:50.013 "traddr": "10.0.0.2", 00:31:50.013 "adrfam": "ipv4", 00:31:50.013 "trsvcid": "4420", 00:31:50.013 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:50.013 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:50.013 "hdgst": false, 00:31:50.013 "ddgst": false 00:31:50.013 }, 00:31:50.013 "method": "bdev_nvme_attach_controller" 00:31:50.013 },{ 00:31:50.013 "params": { 00:31:50.013 "name": "Nvme1", 00:31:50.013 "trtype": "tcp", 00:31:50.013 "traddr": "10.0.0.2", 00:31:50.013 "adrfam": "ipv4", 00:31:50.013 "trsvcid": "4420", 00:31:50.013 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:50.013 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:50.013 "hdgst": false, 00:31:50.013 "ddgst": false 00:31:50.013 }, 00:31:50.013 "method": "bdev_nvme_attach_controller" 00:31:50.013 }' 00:31:50.013 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:31:50.013 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:31:50.013 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:31:50.013 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:50.013 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:31:50.013 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:31:50.013 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:31:50.013 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:31:50.013 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:50.013 09:22:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:50.013 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:50.013 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:50.013 fio-3.35 00:31:50.013 Starting 2 threads 00:31:59.981 00:31:59.981 filename0: (groupid=0, jobs=1): err= 0: pid=3504943: Wed Nov 20 09:22:36 2024 00:31:59.981 read: IOPS=196, BW=785KiB/s (804kB/s)(7856KiB/10009msec) 00:31:59.981 slat (usec): min=7, max=118, avg= 9.04, stdev= 3.71 00:31:59.981 clat (usec): min=548, max=42359, avg=20355.41, stdev=20324.70 00:31:59.981 lat (usec): min=555, max=42370, avg=20364.45, stdev=20324.48 00:31:59.981 clat percentiles (usec): 00:31:59.981 | 1.00th=[ 578], 5.00th=[ 594], 10.00th=[ 611], 20.00th=[ 627], 00:31:59.981 | 30.00th=[ 635], 40.00th=[ 676], 50.00th=[ 742], 60.00th=[41157], 00:31:59.981 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:59.981 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:59.981 | 99.99th=[42206] 00:31:59.981 bw ( KiB/s): min= 704, max= 896, per=66.22%, avg=784.00, stdev=45.85, samples=20 00:31:59.981 iops : min= 176, max= 224, avg=196.00, stdev=11.46, samples=20 00:31:59.981 lat (usec) : 750=50.10%, 1000=0.81% 00:31:59.981 lat (msec) : 2=0.61%, 50=48.47% 00:31:59.981 cpu : usr=95.19%, sys=4.50%, ctx=14, majf=0, minf=205 00:31:59.981 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:59.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:31:59.981 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.981 issued rwts: total=1964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.981 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:59.981 filename1: (groupid=0, jobs=1): err= 0: pid=3504944: Wed Nov 20 09:22:36 2024 00:31:59.981 read: IOPS=99, BW=398KiB/s (407kB/s)(3984KiB/10014msec) 00:31:59.981 slat (nsec): min=7241, max=33850, avg=9382.06, stdev=3004.50 00:31:59.981 clat (usec): min=531, max=42299, avg=40185.20, stdev=5670.93 00:31:59.981 lat (usec): min=539, max=42310, avg=40194.58, stdev=5670.81 00:31:59.981 clat percentiles (usec): 00:31:59.981 | 1.00th=[ 619], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:59.981 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:59.981 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:59.981 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:59.981 | 99.99th=[42206] 00:31:59.981 bw ( KiB/s): min= 384, max= 416, per=33.49%, avg=396.80, stdev=16.08, samples=20 00:31:59.981 iops : min= 96, max= 104, avg=99.20, stdev= 4.02, samples=20 00:31:59.981 lat (usec) : 750=2.01% 00:31:59.981 lat (msec) : 50=97.99% 00:31:59.981 cpu : usr=94.87%, sys=4.82%, ctx=15, majf=0, minf=153 00:31:59.981 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:59.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.981 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.981 issued rwts: total=996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.981 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:59.981 00:31:59.981 Run status group 0 (all jobs): 00:31:59.981 READ: bw=1182KiB/s (1211kB/s), 398KiB/s-785KiB/s (407kB/s-804kB/s), io=11.6MiB (12.1MB), run=10009-10014msec 00:32:00.239 09:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # 
destroy_subsystems 0 1 00:32:00.239 09:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:32:00.239 09:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:00.239 09:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:00.239 09:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:32:00.239 09:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:00.239 09:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.239 09:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:00.239 09:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.239 09:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:00.240 09:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.240 09:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:00.240 09:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.240 09:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:00.240 09:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:00.240 09:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:32:00.240 09:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:00.240 09:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.240 09:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:00.240 09:22:37 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.240 09:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:00.240 09:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.240 09:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:00.240 09:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.240 00:32:00.240 real 0m11.376s 00:32:00.240 user 0m20.349s 00:32:00.240 sys 0m1.231s 00:32:00.240 09:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:00.240 09:22:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:00.240 ************************************ 00:32:00.240 END TEST fio_dif_1_multi_subsystems 00:32:00.240 ************************************ 00:32:00.240 09:22:37 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:32:00.240 09:22:37 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:00.240 09:22:37 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:00.240 09:22:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:00.240 ************************************ 00:32:00.240 START TEST fio_dif_rand_params 00:32:00.240 ************************************ 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:32:00.240 09:22:37 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:00.240 bdev_null0 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:00.240 [2024-11-20 09:22:37.210565] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:00.240 { 00:32:00.240 "params": { 00:32:00.240 "name": "Nvme$subsystem", 00:32:00.240 "trtype": "$TEST_TRANSPORT", 00:32:00.240 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:00.240 "adrfam": "ipv4", 00:32:00.240 "trsvcid": "$NVMF_PORT", 00:32:00.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:00.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:00.240 "hdgst": ${hdgst:-false}, 00:32:00.240 "ddgst": ${ddgst:-false} 00:32:00.240 }, 
00:32:00.240 "method": "bdev_nvme_attach_controller" 00:32:00.240 } 00:32:00.240 EOF 00:32:00.240 )") 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 
00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:00.240 "params": { 00:32:00.240 "name": "Nvme0", 00:32:00.240 "trtype": "tcp", 00:32:00.240 "traddr": "10.0.0.2", 00:32:00.240 "adrfam": "ipv4", 00:32:00.240 "trsvcid": "4420", 00:32:00.240 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:00.240 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:00.240 "hdgst": false, 00:32:00.240 "ddgst": false 00:32:00.240 }, 00:32:00.240 "method": "bdev_nvme_attach_controller" 00:32:00.240 }' 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:00.240 09:22:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:00.498 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:00.498 ... 00:32:00.498 fio-3.35 00:32:00.498 Starting 3 threads 00:32:07.060 00:32:07.060 filename0: (groupid=0, jobs=1): err= 0: pid=3506343: Wed Nov 20 09:22:43 2024 00:32:07.060 read: IOPS=232, BW=29.1MiB/s (30.5MB/s)(147MiB/5043msec) 00:32:07.060 slat (nsec): min=7638, max=44030, avg=15011.70, stdev=4336.62 00:32:07.060 clat (usec): min=4855, max=51909, avg=12844.07, stdev=2422.46 00:32:07.060 lat (usec): min=4867, max=51923, avg=12859.08, stdev=2422.88 00:32:07.060 clat percentiles (usec): 00:32:07.060 | 1.00th=[ 7111], 5.00th=[ 9634], 10.00th=[10552], 20.00th=[11338], 00:32:07.060 | 30.00th=[11863], 40.00th=[12387], 50.00th=[12780], 60.00th=[13435], 00:32:07.060 | 70.00th=[13829], 80.00th=[14484], 90.00th=[15139], 95.00th=[15664], 00:32:07.060 | 99.00th=[16712], 99.50th=[17171], 99.90th=[44827], 99.95th=[52167], 00:32:07.060 | 99.99th=[52167] 00:32:07.060 bw ( KiB/s): min=28416, max=33024, per=33.97%, avg=29977.60, stdev=1445.39, samples=10 00:32:07.060 iops : min= 222, max= 258, avg=234.20, stdev=11.29, samples=10 00:32:07.060 lat (msec) : 10=6.05%, 20=93.78%, 50=0.09%, 100=0.09% 00:32:07.060 cpu : usr=93.30%, sys=6.19%, ctx=8, majf=0, minf=140 00:32:07.060 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:07.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.060 issued rwts: total=1173,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:07.060 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:07.060 filename0: (groupid=0, jobs=1): err= 0: pid=3506344: Wed Nov 20 09:22:43 2024 00:32:07.060 read: IOPS=235, BW=29.4MiB/s (30.9MB/s)(149MiB/5044msec) 00:32:07.060 slat (nsec): min=4564, max=45767, avg=14222.11, 
stdev=3487.83 00:32:07.060 clat (usec): min=7769, max=52959, avg=12684.42, stdev=3494.02 00:32:07.060 lat (usec): min=7782, max=52978, avg=12698.65, stdev=3493.94 00:32:07.060 clat percentiles (usec): 00:32:07.060 | 1.00th=[ 9372], 5.00th=[10159], 10.00th=[10421], 20.00th=[10945], 00:32:07.060 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12387], 60.00th=[12780], 00:32:07.060 | 70.00th=[13304], 80.00th=[13829], 90.00th=[14746], 95.00th=[15533], 00:32:07.060 | 99.00th=[16909], 99.50th=[48497], 99.90th=[52691], 99.95th=[53216], 00:32:07.060 | 99.99th=[53216] 00:32:07.060 bw ( KiB/s): min=28160, max=32000, per=34.41%, avg=30361.60, stdev=1220.00, samples=10 00:32:07.060 iops : min= 220, max= 250, avg=237.20, stdev= 9.53, samples=10 00:32:07.060 lat (msec) : 10=4.46%, 20=94.87%, 50=0.34%, 100=0.34% 00:32:07.060 cpu : usr=93.79%, sys=5.71%, ctx=9, majf=0, minf=65 00:32:07.060 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:07.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.060 issued rwts: total=1188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:07.060 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:07.060 filename0: (groupid=0, jobs=1): err= 0: pid=3506345: Wed Nov 20 09:22:43 2024 00:32:07.060 read: IOPS=223, BW=27.9MiB/s (29.2MB/s)(140MiB/5003msec) 00:32:07.060 slat (usec): min=7, max=108, avg=17.25, stdev= 6.49 00:32:07.060 clat (usec): min=8853, max=52364, avg=13426.45, stdev=3251.38 00:32:07.060 lat (usec): min=8861, max=52379, avg=13443.69, stdev=3251.26 00:32:07.060 clat percentiles (usec): 00:32:07.060 | 1.00th=[ 9503], 5.00th=[10552], 10.00th=[10814], 20.00th=[11600], 00:32:07.061 | 30.00th=[12125], 40.00th=[12649], 50.00th=[13173], 60.00th=[13698], 00:32:07.061 | 70.00th=[14222], 80.00th=[14877], 90.00th=[15664], 95.00th=[16450], 00:32:07.061 | 99.00th=[17433], 99.50th=[47973], 99.90th=[52167], 
99.95th=[52167], 00:32:07.061 | 99.99th=[52167] 00:32:07.061 bw ( KiB/s): min=25344, max=29696, per=32.29%, avg=28492.80, stdev=1294.43, samples=10 00:32:07.061 iops : min= 198, max= 232, avg=222.60, stdev=10.11, samples=10 00:32:07.061 lat (msec) : 10=2.06%, 20=97.40%, 50=0.27%, 100=0.27% 00:32:07.061 cpu : usr=91.62%, sys=6.52%, ctx=152, majf=0, minf=115 00:32:07.061 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:07.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.061 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.061 issued rwts: total=1116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:07.061 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:07.061 00:32:07.061 Run status group 0 (all jobs): 00:32:07.061 READ: bw=86.2MiB/s (90.4MB/s), 27.9MiB/s-29.4MiB/s (29.2MB/s-30.9MB/s), io=435MiB (456MB), run=5003-5044msec 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:07.061 bdev_null0 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.061 
09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:07.061 [2024-11-20 09:22:43.569902] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:07.061 bdev_null1 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.061 
09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:32:07.061 bdev_null2 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem 
config 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:07.061 { 00:32:07.061 "params": { 00:32:07.061 "name": "Nvme$subsystem", 00:32:07.061 "trtype": "$TEST_TRANSPORT", 00:32:07.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:07.061 "adrfam": "ipv4", 00:32:07.061 "trsvcid": "$NVMF_PORT", 00:32:07.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:07.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:07.061 "hdgst": ${hdgst:-false}, 00:32:07.061 "ddgst": ${ddgst:-false} 00:32:07.061 }, 00:32:07.061 "method": "bdev_nvme_attach_controller" 00:32:07.061 } 00:32:07.061 EOF 00:32:07.061 )") 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:32:07.061 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:07.062 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:07.062 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:07.062 09:22:43 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1343 -- # shift 00:32:07.062 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:32:07.062 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:07.062 09:22:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:07.062 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:07.062 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:07.062 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:32:07.062 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:07.062 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:07.062 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:07.062 09:22:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:07.062 09:22:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:07.062 { 00:32:07.062 "params": { 00:32:07.062 "name": "Nvme$subsystem", 00:32:07.062 "trtype": "$TEST_TRANSPORT", 00:32:07.062 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:07.062 "adrfam": "ipv4", 00:32:07.062 "trsvcid": "$NVMF_PORT", 00:32:07.062 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:07.062 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:07.062 "hdgst": ${hdgst:-false}, 00:32:07.062 "ddgst": ${ddgst:-false} 00:32:07.062 }, 00:32:07.062 "method": "bdev_nvme_attach_controller" 00:32:07.062 } 00:32:07.062 EOF 00:32:07.062 )") 00:32:07.062 09:22:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:07.062 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:07.062 09:22:43 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:32:07.062 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:07.062 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:07.062 09:22:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:07.062 09:22:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:07.062 09:22:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:07.062 { 00:32:07.062 "params": { 00:32:07.062 "name": "Nvme$subsystem", 00:32:07.062 "trtype": "$TEST_TRANSPORT", 00:32:07.062 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:07.062 "adrfam": "ipv4", 00:32:07.062 "trsvcid": "$NVMF_PORT", 00:32:07.062 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:07.062 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:07.062 "hdgst": ${hdgst:-false}, 00:32:07.062 "ddgst": ${ddgst:-false} 00:32:07.062 }, 00:32:07.062 "method": "bdev_nvme_attach_controller" 00:32:07.062 } 00:32:07.062 EOF 00:32:07.062 )") 00:32:07.062 09:22:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:07.062 09:22:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:32:07.062 09:22:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:32:07.062 09:22:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:07.062 "params": { 00:32:07.062 "name": "Nvme0", 00:32:07.062 "trtype": "tcp", 00:32:07.062 "traddr": "10.0.0.2", 00:32:07.062 "adrfam": "ipv4", 00:32:07.062 "trsvcid": "4420", 00:32:07.062 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:07.062 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:07.062 "hdgst": false, 00:32:07.062 "ddgst": false 00:32:07.062 }, 00:32:07.062 "method": "bdev_nvme_attach_controller" 00:32:07.062 },{ 00:32:07.062 "params": { 00:32:07.062 "name": "Nvme1", 00:32:07.062 "trtype": "tcp", 00:32:07.062 "traddr": "10.0.0.2", 00:32:07.062 "adrfam": "ipv4", 00:32:07.062 "trsvcid": "4420", 00:32:07.062 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:07.062 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:07.062 "hdgst": false, 00:32:07.062 "ddgst": false 00:32:07.062 }, 00:32:07.062 "method": "bdev_nvme_attach_controller" 00:32:07.062 },{ 00:32:07.062 "params": { 00:32:07.062 "name": "Nvme2", 00:32:07.062 "trtype": "tcp", 00:32:07.062 "traddr": "10.0.0.2", 00:32:07.062 "adrfam": "ipv4", 00:32:07.062 "trsvcid": "4420", 00:32:07.062 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:07.062 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:07.062 "hdgst": false, 00:32:07.062 "ddgst": false 00:32:07.062 }, 00:32:07.062 "method": "bdev_nvme_attach_controller" 00:32:07.062 }' 00:32:07.062 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:32:07.062 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:32:07.062 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:07.062 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:07.062 09:22:43 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:32:07.062 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:07.062 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:32:07.062 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:32:07.062 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:07.062 09:22:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:07.062 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:07.062 ... 00:32:07.062 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:07.062 ... 00:32:07.062 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:07.062 ... 
00:32:07.062 fio-3.35 00:32:07.062 Starting 24 threads 00:32:19.266 00:32:19.266 filename0: (groupid=0, jobs=1): err= 0: pid=3507212: Wed Nov 20 09:22:55 2024 00:32:19.266 read: IOPS=69, BW=278KiB/s (285kB/s)(2808KiB/10104msec) 00:32:19.266 slat (nsec): min=10474, max=56667, avg=23146.41, stdev=7526.70 00:32:19.266 clat (msec): min=82, max=374, avg=229.35, stdev=48.38 00:32:19.266 lat (msec): min=82, max=374, avg=229.37, stdev=48.38 00:32:19.266 clat percentiles (msec): 00:32:19.267 | 1.00th=[ 93], 5.00th=[ 157], 10.00th=[ 180], 20.00th=[ 203], 00:32:19.267 | 30.00th=[ 209], 40.00th=[ 215], 50.00th=[ 220], 60.00th=[ 236], 00:32:19.267 | 70.00th=[ 259], 80.00th=[ 279], 90.00th=[ 288], 95.00th=[ 292], 00:32:19.267 | 99.00th=[ 326], 99.50th=[ 372], 99.90th=[ 376], 99.95th=[ 376], 00:32:19.267 | 99.99th=[ 376] 00:32:19.267 bw ( KiB/s): min= 224, max= 384, per=4.51%, avg=274.40, stdev=37.53, samples=20 00:32:19.267 iops : min= 56, max= 96, avg=68.60, stdev= 9.38, samples=20 00:32:19.267 lat (msec) : 100=4.27%, 250=62.11%, 500=33.62% 00:32:19.267 cpu : usr=97.29%, sys=1.95%, ctx=43, majf=0, minf=9 00:32:19.267 IO depths : 1=1.3%, 2=4.7%, 4=16.2%, 8=66.4%, 16=11.4%, 32=0.0%, >=64=0.0% 00:32:19.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.267 complete : 0=0.0%, 4=91.6%, 8=3.1%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.267 issued rwts: total=702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.267 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:19.267 filename0: (groupid=0, jobs=1): err= 0: pid=3507213: Wed Nov 20 09:22:55 2024 00:32:19.267 read: IOPS=69, BW=277KiB/s (284kB/s)(2800KiB/10104msec) 00:32:19.267 slat (nsec): min=8134, max=79036, avg=22999.92, stdev=15660.95 00:32:19.267 clat (msec): min=92, max=400, avg=230.25, stdev=46.13 00:32:19.267 lat (msec): min=92, max=400, avg=230.28, stdev=46.13 00:32:19.267 clat percentiles (msec): 00:32:19.267 | 1.00th=[ 93], 5.00th=[ 171], 10.00th=[ 197], 20.00th=[ 203], 
00:32:19.267 | 30.00th=[ 209], 40.00th=[ 213], 50.00th=[ 218], 60.00th=[ 232], 00:32:19.267 | 70.00th=[ 255], 80.00th=[ 279], 90.00th=[ 284], 95.00th=[ 296], 00:32:19.267 | 99.00th=[ 313], 99.50th=[ 342], 99.90th=[ 401], 99.95th=[ 401], 00:32:19.267 | 99.99th=[ 401] 00:32:19.267 bw ( KiB/s): min= 240, max= 384, per=4.49%, avg=273.60, stdev=35.55, samples=20 00:32:19.267 iops : min= 60, max= 96, avg=68.40, stdev= 8.89, samples=20 00:32:19.267 lat (msec) : 100=4.29%, 250=63.14%, 500=32.57% 00:32:19.267 cpu : usr=98.51%, sys=1.08%, ctx=18, majf=0, minf=9 00:32:19.267 IO depths : 1=2.1%, 2=6.0%, 4=17.7%, 8=63.7%, 16=10.4%, 32=0.0%, >=64=0.0% 00:32:19.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.267 complete : 0=0.0%, 4=92.0%, 8=2.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.267 issued rwts: total=700,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.267 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:19.267 filename0: (groupid=0, jobs=1): err= 0: pid=3507214: Wed Nov 20 09:22:55 2024 00:32:19.267 read: IOPS=82, BW=331KiB/s (339kB/s)(3344KiB/10107msec) 00:32:19.267 slat (nsec): min=3972, max=71070, avg=11732.37, stdev=8681.97 00:32:19.267 clat (msec): min=87, max=327, avg=193.01, stdev=40.00 00:32:19.267 lat (msec): min=87, max=327, avg=193.02, stdev=40.00 00:32:19.267 clat percentiles (msec): 00:32:19.267 | 1.00th=[ 88], 5.00th=[ 111], 10.00th=[ 150], 20.00th=[ 161], 00:32:19.267 | 30.00th=[ 182], 40.00th=[ 194], 50.00th=[ 203], 60.00th=[ 209], 00:32:19.267 | 70.00th=[ 211], 80.00th=[ 218], 90.00th=[ 230], 95.00th=[ 249], 00:32:19.267 | 99.00th=[ 309], 99.50th=[ 313], 99.90th=[ 330], 99.95th=[ 330], 00:32:19.267 | 99.99th=[ 330] 00:32:19.267 bw ( KiB/s): min= 256, max= 512, per=5.40%, avg=328.00, stdev=61.31, samples=20 00:32:19.267 iops : min= 64, max= 128, avg=82.00, stdev=15.33, samples=20 00:32:19.267 lat (msec) : 100=3.83%, 250=91.63%, 500=4.55% 00:32:19.267 cpu : usr=98.16%, sys=1.29%, ctx=71, majf=0, 
minf=9 00:32:19.267 IO depths : 1=1.0%, 2=4.1%, 4=15.3%, 8=67.9%, 16=11.7%, 32=0.0%, >=64=0.0% 00:32:19.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.267 complete : 0=0.0%, 4=91.3%, 8=3.3%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.267 issued rwts: total=836,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.267 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:19.267 filename0: (groupid=0, jobs=1): err= 0: pid=3507215: Wed Nov 20 09:22:55 2024 00:32:19.267 read: IOPS=58, BW=235KiB/s (241kB/s)(2368KiB/10078msec) 00:32:19.267 slat (nsec): min=8160, max=79125, avg=21600.52, stdev=10858.61 00:32:19.267 clat (msec): min=119, max=416, avg=272.18, stdev=53.91 00:32:19.267 lat (msec): min=119, max=416, avg=272.20, stdev=53.92 00:32:19.267 clat percentiles (msec): 00:32:19.267 | 1.00th=[ 121], 5.00th=[ 159], 10.00th=[ 199], 20.00th=[ 218], 00:32:19.267 | 30.00th=[ 249], 40.00th=[ 279], 50.00th=[ 279], 60.00th=[ 296], 00:32:19.267 | 70.00th=[ 309], 80.00th=[ 317], 90.00th=[ 326], 95.00th=[ 334], 00:32:19.267 | 99.00th=[ 368], 99.50th=[ 409], 99.90th=[ 418], 99.95th=[ 418], 00:32:19.267 | 99.99th=[ 418] 00:32:19.267 bw ( KiB/s): min= 128, max= 368, per=3.79%, avg=230.40, stdev=64.08, samples=20 00:32:19.267 iops : min= 32, max= 92, avg=57.60, stdev=16.02, samples=20 00:32:19.267 lat (msec) : 250=31.42%, 500=68.58% 00:32:19.267 cpu : usr=98.21%, sys=1.26%, ctx=77, majf=0, minf=9 00:32:19.267 IO depths : 1=4.1%, 2=10.3%, 4=25.0%, 8=52.2%, 16=8.4%, 32=0.0%, >=64=0.0% 00:32:19.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.267 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.267 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.267 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:19.267 filename0: (groupid=0, jobs=1): err= 0: pid=3507216: Wed Nov 20 09:22:55 2024 00:32:19.267 read: IOPS=53, BW=215KiB/s 
(221kB/s)(2176KiB/10098msec) 00:32:19.267 slat (nsec): min=5272, max=96435, avg=65208.47, stdev=19964.91 00:32:19.267 clat (msec): min=180, max=435, avg=295.01, stdev=37.04 00:32:19.267 lat (msec): min=180, max=435, avg=295.07, stdev=37.04 00:32:19.267 clat percentiles (msec): 00:32:19.267 | 1.00th=[ 197], 5.00th=[ 215], 10.00th=[ 251], 20.00th=[ 271], 00:32:19.267 | 30.00th=[ 279], 40.00th=[ 288], 50.00th=[ 300], 60.00th=[ 305], 00:32:19.267 | 70.00th=[ 317], 80.00th=[ 326], 90.00th=[ 338], 95.00th=[ 347], 00:32:19.267 | 99.00th=[ 368], 99.50th=[ 418], 99.90th=[ 435], 99.95th=[ 435], 00:32:19.267 | 99.99th=[ 435] 00:32:19.267 bw ( KiB/s): min= 128, max= 272, per=3.47%, avg=211.20, stdev=58.18, samples=20 00:32:19.267 iops : min= 32, max= 68, avg=52.80, stdev=14.54, samples=20 00:32:19.267 lat (msec) : 250=9.93%, 500=90.07% 00:32:19.267 cpu : usr=98.18%, sys=1.36%, ctx=93, majf=0, minf=10 00:32:19.267 IO depths : 1=1.3%, 2=7.5%, 4=25.0%, 8=55.0%, 16=11.2%, 32=0.0%, >=64=0.0% 00:32:19.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.267 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.267 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.267 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:19.267 filename0: (groupid=0, jobs=1): err= 0: pid=3507217: Wed Nov 20 09:22:55 2024 00:32:19.267 read: IOPS=57, BW=228KiB/s (234kB/s)(2304KiB/10099msec) 00:32:19.267 slat (usec): min=6, max=108, avg=70.73, stdev=15.43 00:32:19.267 clat (msec): min=91, max=407, avg=279.91, stdev=58.54 00:32:19.267 lat (msec): min=91, max=407, avg=279.98, stdev=58.55 00:32:19.267 clat percentiles (msec): 00:32:19.267 | 1.00th=[ 92], 5.00th=[ 94], 10.00th=[ 209], 20.00th=[ 251], 00:32:19.267 | 30.00th=[ 271], 40.00th=[ 279], 50.00th=[ 292], 60.00th=[ 305], 00:32:19.267 | 70.00th=[ 309], 80.00th=[ 326], 90.00th=[ 330], 95.00th=[ 347], 00:32:19.267 | 99.00th=[ 397], 99.50th=[ 397], 99.90th=[ 409], 
99.95th=[ 409], 00:32:19.267 | 99.99th=[ 409] 00:32:19.267 bw ( KiB/s): min= 128, max= 384, per=3.69%, avg=224.00, stdev=69.06, samples=20 00:32:19.267 iops : min= 32, max= 96, avg=56.00, stdev=17.27, samples=20 00:32:19.267 lat (msec) : 100=5.56%, 250=13.19%, 500=81.25% 00:32:19.267 cpu : usr=98.22%, sys=1.28%, ctx=44, majf=0, minf=9 00:32:19.267 IO depths : 1=5.2%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.3%, 32=0.0%, >=64=0.0% 00:32:19.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.267 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.267 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.267 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:19.267 filename0: (groupid=0, jobs=1): err= 0: pid=3507218: Wed Nov 20 09:22:55 2024 00:32:19.267 read: IOPS=53, BW=215KiB/s (220kB/s)(2168KiB/10078msec) 00:32:19.267 slat (usec): min=18, max=114, avg=74.17, stdev=14.96 00:32:19.267 clat (msec): min=120, max=452, avg=296.67, stdev=55.55 00:32:19.267 lat (msec): min=120, max=452, avg=296.74, stdev=55.55 00:32:19.267 clat percentiles (msec): 00:32:19.267 | 1.00th=[ 159], 5.00th=[ 199], 10.00th=[ 222], 20.00th=[ 271], 00:32:19.267 | 30.00th=[ 279], 40.00th=[ 288], 50.00th=[ 300], 60.00th=[ 305], 00:32:19.267 | 70.00th=[ 313], 80.00th=[ 326], 90.00th=[ 355], 95.00th=[ 414], 00:32:19.267 | 99.00th=[ 435], 99.50th=[ 435], 99.90th=[ 451], 99.95th=[ 451], 00:32:19.267 | 99.99th=[ 451] 00:32:19.267 bw ( KiB/s): min= 128, max= 256, per=3.46%, avg=210.40, stdev=57.40, samples=20 00:32:19.267 iops : min= 32, max= 64, avg=52.60, stdev=14.35, samples=20 00:32:19.267 lat (msec) : 250=16.24%, 500=83.76% 00:32:19.267 cpu : usr=97.08%, sys=1.92%, ctx=192, majf=0, minf=9 00:32:19.267 IO depths : 1=3.3%, 2=9.6%, 4=25.1%, 8=53.0%, 16=9.0%, 32=0.0%, >=64=0.0% 00:32:19.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.267 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.267 issued rwts: total=542,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.267 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:19.267 filename0: (groupid=0, jobs=1): err= 0: pid=3507219: Wed Nov 20 09:22:55 2024 00:32:19.267 read: IOPS=71, BW=287KiB/s (293kB/s)(2896KiB/10104msec) 00:32:19.267 slat (nsec): min=7755, max=77738, avg=21861.87, stdev=15528.16 00:32:19.267 clat (msec): min=91, max=356, avg=222.32, stdev=45.31 00:32:19.267 lat (msec): min=91, max=356, avg=222.34, stdev=45.32 00:32:19.267 clat percentiles (msec): 00:32:19.267 | 1.00th=[ 93], 5.00th=[ 165], 10.00th=[ 186], 20.00th=[ 197], 00:32:19.267 | 30.00th=[ 205], 40.00th=[ 209], 50.00th=[ 211], 60.00th=[ 224], 00:32:19.267 | 70.00th=[ 245], 80.00th=[ 271], 90.00th=[ 279], 95.00th=[ 284], 00:32:19.267 | 99.00th=[ 347], 99.50th=[ 355], 99.90th=[ 359], 99.95th=[ 359], 00:32:19.267 | 99.99th=[ 359] 00:32:19.267 bw ( KiB/s): min= 224, max= 384, per=4.66%, avg=283.20, stdev=44.08, samples=20 00:32:19.267 iops : min= 56, max= 96, avg=70.80, stdev=11.02, samples=20 00:32:19.267 lat (msec) : 100=3.87%, 250=68.78%, 500=27.35% 00:32:19.267 cpu : usr=98.47%, sys=1.14%, ctx=23, majf=0, minf=9 00:32:19.267 IO depths : 1=1.5%, 2=4.4%, 4=14.6%, 8=68.2%, 16=11.2%, 32=0.0%, >=64=0.0% 00:32:19.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.267 complete : 0=0.0%, 4=91.1%, 8=3.6%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.268 issued rwts: total=724,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.268 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:19.268 filename1: (groupid=0, jobs=1): err= 0: pid=3507220: Wed Nov 20 09:22:55 2024 00:32:19.268 read: IOPS=68, BW=275KiB/s (281kB/s)(2776KiB/10106msec) 00:32:19.268 slat (usec): min=8, max=127, avg=29.76, stdev=24.93 00:32:19.268 clat (msec): min=92, max=379, avg=231.43, stdev=48.02 00:32:19.268 lat (msec): min=92, max=379, avg=231.46, stdev=48.03 00:32:19.268 clat 
percentiles (msec): 00:32:19.268 | 1.00th=[ 93], 5.00th=[ 167], 10.00th=[ 194], 20.00th=[ 201], 00:32:19.268 | 30.00th=[ 209], 40.00th=[ 215], 50.00th=[ 222], 60.00th=[ 236], 00:32:19.268 | 70.00th=[ 271], 80.00th=[ 279], 90.00th=[ 288], 95.00th=[ 309], 00:32:19.268 | 99.00th=[ 326], 99.50th=[ 347], 99.90th=[ 380], 99.95th=[ 380], 00:32:19.268 | 99.99th=[ 380] 00:32:19.268 bw ( KiB/s): min= 144, max= 384, per=4.46%, avg=271.20, stdev=50.72, samples=20 00:32:19.268 iops : min= 36, max= 96, avg=67.80, stdev=12.68, samples=20 00:32:19.268 lat (msec) : 100=4.61%, 250=61.67%, 500=33.72% 00:32:19.268 cpu : usr=98.19%, sys=1.26%, ctx=32, majf=0, minf=9 00:32:19.268 IO depths : 1=2.6%, 2=7.5%, 4=20.9%, 8=59.1%, 16=9.9%, 32=0.0%, >=64=0.0% 00:32:19.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.268 complete : 0=0.0%, 4=92.9%, 8=1.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.268 issued rwts: total=694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.268 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:19.268 filename1: (groupid=0, jobs=1): err= 0: pid=3507221: Wed Nov 20 09:22:55 2024 00:32:19.268 read: IOPS=78, BW=315KiB/s (323kB/s)(3184KiB/10104msec) 00:32:19.268 slat (nsec): min=8029, max=73454, avg=14618.06, stdev=11765.29 00:32:19.268 clat (msec): min=82, max=297, avg=201.73, stdev=31.70 00:32:19.268 lat (msec): min=82, max=297, avg=201.74, stdev=31.70 00:32:19.268 clat percentiles (msec): 00:32:19.268 | 1.00th=[ 93], 5.00th=[ 131], 10.00th=[ 176], 20.00th=[ 190], 00:32:19.268 | 30.00th=[ 197], 40.00th=[ 203], 50.00th=[ 205], 60.00th=[ 209], 00:32:19.268 | 70.00th=[ 211], 80.00th=[ 220], 90.00th=[ 236], 95.00th=[ 247], 00:32:19.268 | 99.00th=[ 259], 99.50th=[ 296], 99.90th=[ 296], 99.95th=[ 296], 00:32:19.268 | 99.99th=[ 296] 00:32:19.268 bw ( KiB/s): min= 256, max= 384, per=5.20%, avg=316.00, stdev=45.51, samples=20 00:32:19.268 iops : min= 64, max= 96, avg=79.00, stdev=11.38, samples=20 00:32:19.268 lat (msec) : 
100=3.77%, 250=92.21%, 500=4.02% 00:32:19.268 cpu : usr=98.60%, sys=0.97%, ctx=17, majf=0, minf=9 00:32:19.268 IO depths : 1=1.1%, 2=2.8%, 4=11.1%, 8=73.6%, 16=11.4%, 32=0.0%, >=64=0.0% 00:32:19.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.268 complete : 0=0.0%, 4=90.1%, 8=4.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.268 issued rwts: total=796,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.268 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:19.268 filename1: (groupid=0, jobs=1): err= 0: pid=3507222: Wed Nov 20 09:22:55 2024 00:32:19.268 read: IOPS=53, BW=216KiB/s (221kB/s)(2176KiB/10081msec) 00:32:19.268 slat (usec): min=16, max=113, avg=71.10, stdev=13.30 00:32:19.268 clat (msec): min=157, max=450, avg=295.90, stdev=50.85 00:32:19.268 lat (msec): min=157, max=450, avg=295.97, stdev=50.84 00:32:19.268 clat percentiles (msec): 00:32:19.268 | 1.00th=[ 159], 5.00th=[ 215], 10.00th=[ 224], 20.00th=[ 271], 00:32:19.268 | 30.00th=[ 279], 40.00th=[ 288], 50.00th=[ 296], 60.00th=[ 305], 00:32:19.268 | 70.00th=[ 313], 80.00th=[ 326], 90.00th=[ 347], 95.00th=[ 401], 00:32:19.268 | 99.00th=[ 430], 99.50th=[ 435], 99.90th=[ 451], 99.95th=[ 451], 00:32:19.268 | 99.99th=[ 451] 00:32:19.268 bw ( KiB/s): min= 128, max= 256, per=3.47%, avg=211.20, stdev=57.95, samples=20 00:32:19.268 iops : min= 32, max= 64, avg=52.80, stdev=14.49, samples=20 00:32:19.268 lat (msec) : 250=14.71%, 500=85.29% 00:32:19.268 cpu : usr=98.02%, sys=1.38%, ctx=30, majf=0, minf=9 00:32:19.268 IO depths : 1=4.4%, 2=10.7%, 4=25.0%, 8=51.8%, 16=8.1%, 32=0.0%, >=64=0.0% 00:32:19.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.268 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.268 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.268 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:19.268 filename1: (groupid=0, jobs=1): err= 0: pid=3507223: Wed 
Nov 20 09:22:55 2024 00:32:19.268 read: IOPS=78, BW=315KiB/s (323kB/s)(3184KiB/10104msec) 00:32:19.268 slat (nsec): min=8084, max=74326, avg=15447.42, stdev=11033.93 00:32:19.268 clat (msec): min=92, max=353, avg=202.18, stdev=35.79 00:32:19.268 lat (msec): min=92, max=353, avg=202.20, stdev=35.79 00:32:19.268 clat percentiles (msec): 00:32:19.268 | 1.00th=[ 93], 5.00th=[ 142], 10.00th=[ 163], 20.00th=[ 182], 00:32:19.268 | 30.00th=[ 197], 40.00th=[ 199], 50.00th=[ 205], 60.00th=[ 211], 00:32:19.268 | 70.00th=[ 218], 80.00th=[ 220], 90.00th=[ 236], 95.00th=[ 251], 00:32:19.268 | 99.00th=[ 317], 99.50th=[ 355], 99.90th=[ 355], 99.95th=[ 355], 00:32:19.268 | 99.99th=[ 355] 00:32:19.268 bw ( KiB/s): min= 256, max= 384, per=5.17%, avg=314.40, stdev=55.73, samples=20 00:32:19.268 iops : min= 64, max= 96, avg=78.60, stdev=13.93, samples=20 00:32:19.268 lat (msec) : 100=3.77%, 250=90.45%, 500=5.78% 00:32:19.268 cpu : usr=98.55%, sys=1.04%, ctx=19, majf=0, minf=9 00:32:19.268 IO depths : 1=0.9%, 2=6.0%, 4=21.6%, 8=59.8%, 16=11.7%, 32=0.0%, >=64=0.0% 00:32:19.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.268 complete : 0=0.0%, 4=93.3%, 8=1.2%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.268 issued rwts: total=796,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.268 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:19.268 filename1: (groupid=0, jobs=1): err= 0: pid=3507224: Wed Nov 20 09:22:55 2024 00:32:19.268 read: IOPS=64, BW=256KiB/s (262kB/s)(2584KiB/10089msec) 00:32:19.268 slat (nsec): min=7279, max=64688, avg=21387.50, stdev=12199.77 00:32:19.268 clat (msec): min=110, max=429, avg=248.95, stdev=53.61 00:32:19.268 lat (msec): min=110, max=429, avg=248.97, stdev=53.61 00:32:19.268 clat percentiles (msec): 00:32:19.268 | 1.00th=[ 111], 5.00th=[ 155], 10.00th=[ 197], 20.00th=[ 209], 00:32:19.268 | 30.00th=[ 215], 40.00th=[ 220], 50.00th=[ 243], 60.00th=[ 271], 00:32:19.268 | 70.00th=[ 279], 80.00th=[ 296], 90.00th=[ 
317], 95.00th=[ 330], 00:32:19.268 | 99.00th=[ 380], 99.50th=[ 401], 99.90th=[ 430], 99.95th=[ 430], 00:32:19.268 | 99.99th=[ 430] 00:32:19.268 bw ( KiB/s): min= 128, max= 304, per=4.15%, avg=252.00, stdev=31.52, samples=20 00:32:19.268 iops : min= 32, max= 76, avg=63.00, stdev= 7.88, samples=20 00:32:19.268 lat (msec) : 250=52.63%, 500=47.37% 00:32:19.268 cpu : usr=98.27%, sys=1.36%, ctx=14, majf=0, minf=9 00:32:19.268 IO depths : 1=2.2%, 2=7.3%, 4=21.5%, 8=58.7%, 16=10.4%, 32=0.0%, >=64=0.0% 00:32:19.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.268 complete : 0=0.0%, 4=93.1%, 8=1.3%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.268 issued rwts: total=646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.268 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:19.268 filename1: (groupid=0, jobs=1): err= 0: pid=3507225: Wed Nov 20 09:22:55 2024 00:32:19.268 read: IOPS=61, BW=248KiB/s (254kB/s)(2496KiB/10079msec) 00:32:19.268 slat (nsec): min=8219, max=61435, avg=21401.04, stdev=9774.13 00:32:19.268 clat (msec): min=119, max=412, avg=257.50, stdev=56.10 00:32:19.268 lat (msec): min=119, max=412, avg=257.53, stdev=56.10 00:32:19.268 clat percentiles (msec): 00:32:19.268 | 1.00th=[ 121], 5.00th=[ 157], 10.00th=[ 194], 20.00th=[ 215], 00:32:19.268 | 30.00th=[ 220], 40.00th=[ 230], 50.00th=[ 271], 60.00th=[ 279], 00:32:19.268 | 70.00th=[ 292], 80.00th=[ 313], 90.00th=[ 326], 95.00th=[ 334], 00:32:19.268 | 99.00th=[ 397], 99.50th=[ 409], 99.90th=[ 414], 99.95th=[ 414], 00:32:19.268 | 99.99th=[ 414] 00:32:19.268 bw ( KiB/s): min= 128, max= 368, per=4.03%, avg=245.60, stdev=54.02, samples=20 00:32:19.268 iops : min= 32, max= 92, avg=61.40, stdev=13.50, samples=20 00:32:19.268 lat (msec) : 250=46.79%, 500=53.21% 00:32:19.268 cpu : usr=98.42%, sys=1.16%, ctx=26, majf=0, minf=9 00:32:19.268 IO depths : 1=2.1%, 2=8.0%, 4=24.0%, 8=55.4%, 16=10.4%, 32=0.0%, >=64=0.0% 00:32:19.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:32:19.268 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.268 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.268 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:19.268 filename1: (groupid=0, jobs=1): err= 0: pid=3507226: Wed Nov 20 09:22:55 2024 00:32:19.268 read: IOPS=53, BW=215KiB/s (220kB/s)(2168KiB/10078msec) 00:32:19.268 slat (usec): min=16, max=110, avg=72.66, stdev=15.39 00:32:19.268 clat (msec): min=112, max=530, avg=296.68, stdev=57.64 00:32:19.268 lat (msec): min=112, max=530, avg=296.75, stdev=57.64 00:32:19.268 clat percentiles (msec): 00:32:19.268 | 1.00th=[ 142], 5.00th=[ 197], 10.00th=[ 222], 20.00th=[ 271], 00:32:19.268 | 30.00th=[ 279], 40.00th=[ 288], 50.00th=[ 300], 60.00th=[ 305], 00:32:19.268 | 70.00th=[ 313], 80.00th=[ 326], 90.00th=[ 355], 95.00th=[ 414], 00:32:19.268 | 99.00th=[ 435], 99.50th=[ 451], 99.90th=[ 531], 99.95th=[ 531], 00:32:19.268 | 99.99th=[ 531] 00:32:19.268 bw ( KiB/s): min= 112, max= 272, per=3.46%, avg=210.40, stdev=59.48, samples=20 00:32:19.268 iops : min= 28, max= 68, avg=52.60, stdev=14.87, samples=20 00:32:19.268 lat (msec) : 250=16.24%, 500=83.39%, 750=0.37% 00:32:19.268 cpu : usr=98.01%, sys=1.35%, ctx=38, majf=0, minf=9 00:32:19.268 IO depths : 1=3.1%, 2=9.4%, 4=25.1%, 8=53.1%, 16=9.2%, 32=0.0%, >=64=0.0% 00:32:19.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.268 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.268 issued rwts: total=542,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.268 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:19.268 filename1: (groupid=0, jobs=1): err= 0: pid=3507227: Wed Nov 20 09:22:55 2024 00:32:19.268 read: IOPS=54, BW=217KiB/s (222kB/s)(2176KiB/10018msec) 00:32:19.268 slat (usec): min=24, max=109, avg=70.70, stdev=14.23 00:32:19.268 clat (msec): min=181, max=444, avg=294.03, stdev=38.52 00:32:19.268 lat 
(msec): min=181, max=445, avg=294.10, stdev=38.52 00:32:19.268 clat percentiles (msec): 00:32:19.268 | 1.00th=[ 203], 5.00th=[ 215], 10.00th=[ 247], 20.00th=[ 271], 00:32:19.268 | 30.00th=[ 279], 40.00th=[ 288], 50.00th=[ 292], 60.00th=[ 305], 00:32:19.269 | 70.00th=[ 313], 80.00th=[ 321], 90.00th=[ 338], 95.00th=[ 347], 00:32:19.269 | 99.00th=[ 435], 99.50th=[ 439], 99.90th=[ 447], 99.95th=[ 447], 00:32:19.269 | 99.99th=[ 447] 00:32:19.269 bw ( KiB/s): min= 128, max= 384, per=3.47%, avg=211.20, stdev=72.60, samples=20 00:32:19.269 iops : min= 32, max= 96, avg=52.80, stdev=18.15, samples=20 00:32:19.269 lat (msec) : 250=11.76%, 500=88.24% 00:32:19.269 cpu : usr=98.27%, sys=1.22%, ctx=19, majf=0, minf=9 00:32:19.269 IO depths : 1=4.2%, 2=10.5%, 4=25.0%, 8=52.0%, 16=8.3%, 32=0.0%, >=64=0.0% 00:32:19.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.269 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.269 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.269 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:19.269 filename2: (groupid=0, jobs=1): err= 0: pid=3507228: Wed Nov 20 09:22:55 2024 00:32:19.269 read: IOPS=55, BW=222KiB/s (227kB/s)(2240KiB/10096msec) 00:32:19.269 slat (usec): min=14, max=120, avg=71.57, stdev=12.74 00:32:19.269 clat (msec): min=141, max=417, avg=287.84, stdev=44.77 00:32:19.269 lat (msec): min=141, max=417, avg=287.91, stdev=44.77 00:32:19.269 clat percentiles (msec): 00:32:19.269 | 1.00th=[ 159], 5.00th=[ 199], 10.00th=[ 218], 20.00th=[ 268], 00:32:19.269 | 30.00th=[ 279], 40.00th=[ 288], 50.00th=[ 296], 60.00th=[ 300], 00:32:19.269 | 70.00th=[ 313], 80.00th=[ 321], 90.00th=[ 334], 95.00th=[ 347], 00:32:19.269 | 99.00th=[ 401], 99.50th=[ 414], 99.90th=[ 418], 99.95th=[ 418], 00:32:19.269 | 99.99th=[ 418] 00:32:19.269 bw ( KiB/s): min= 128, max= 256, per=3.57%, avg=217.60, stdev=58.59, samples=20 00:32:19.269 iops : min= 32, max= 64, avg=54.40, 
stdev=14.65, samples=20 00:32:19.269 lat (msec) : 250=18.93%, 500=81.07% 00:32:19.269 cpu : usr=98.30%, sys=1.26%, ctx=18, majf=0, minf=9 00:32:19.269 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:32:19.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.269 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.269 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.269 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:19.269 filename2: (groupid=0, jobs=1): err= 0: pid=3507229: Wed Nov 20 09:22:55 2024 00:32:19.269 read: IOPS=68, BW=275KiB/s (282kB/s)(2776KiB/10090msec) 00:32:19.269 slat (nsec): min=8015, max=83681, avg=19870.81, stdev=14955.53 00:32:19.269 clat (msec): min=118, max=404, avg=231.70, stdev=42.98 00:32:19.269 lat (msec): min=118, max=404, avg=231.72, stdev=42.98 00:32:19.269 clat percentiles (msec): 00:32:19.269 | 1.00th=[ 120], 5.00th=[ 190], 10.00th=[ 199], 20.00th=[ 203], 00:32:19.269 | 30.00th=[ 209], 40.00th=[ 213], 50.00th=[ 215], 60.00th=[ 226], 00:32:19.269 | 70.00th=[ 247], 80.00th=[ 271], 90.00th=[ 288], 95.00th=[ 309], 00:32:19.269 | 99.00th=[ 351], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:32:19.269 | 99.99th=[ 405] 00:32:19.269 bw ( KiB/s): min= 128, max= 384, per=4.49%, avg=273.60, stdev=56.84, samples=20 00:32:19.269 iops : min= 32, max= 96, avg=68.40, stdev=14.21, samples=20 00:32:19.269 lat (msec) : 250=71.18%, 500=28.82% 00:32:19.269 cpu : usr=98.30%, sys=1.22%, ctx=42, majf=0, minf=9 00:32:19.269 IO depths : 1=1.4%, 2=4.6%, 4=15.7%, 8=67.1%, 16=11.1%, 32=0.0%, >=64=0.0% 00:32:19.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.269 complete : 0=0.0%, 4=91.5%, 8=3.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.269 issued rwts: total=694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.269 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:19.269 
filename2: (groupid=0, jobs=1): err= 0: pid=3507230: Wed Nov 20 09:22:55 2024 00:32:19.269 read: IOPS=60, BW=242KiB/s (248kB/s)(2432KiB/10044msec) 00:32:19.269 slat (usec): min=8, max=119, avg=60.95, stdev=20.14 00:32:19.269 clat (msec): min=91, max=450, avg=263.78, stdev=61.94 00:32:19.269 lat (msec): min=91, max=450, avg=263.85, stdev=61.95 00:32:19.269 clat percentiles (msec): 00:32:19.269 | 1.00th=[ 92], 5.00th=[ 100], 10.00th=[ 197], 20.00th=[ 209], 00:32:19.269 | 30.00th=[ 236], 40.00th=[ 266], 50.00th=[ 279], 60.00th=[ 284], 00:32:19.269 | 70.00th=[ 300], 80.00th=[ 313], 90.00th=[ 334], 95.00th=[ 342], 00:32:19.269 | 99.00th=[ 351], 99.50th=[ 397], 99.90th=[ 451], 99.95th=[ 451], 00:32:19.269 | 99.99th=[ 451] 00:32:19.269 bw ( KiB/s): min= 128, max= 384, per=3.88%, avg=236.80, stdev=61.33, samples=20 00:32:19.269 iops : min= 32, max= 96, avg=59.20, stdev=15.33, samples=20 00:32:19.269 lat (msec) : 100=5.26%, 250=30.76%, 500=63.98% 00:32:19.269 cpu : usr=97.93%, sys=1.44%, ctx=43, majf=0, minf=9 00:32:19.269 IO depths : 1=4.4%, 2=10.7%, 4=25.0%, 8=51.8%, 16=8.1%, 32=0.0%, >=64=0.0% 00:32:19.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.269 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.269 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.269 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:19.269 filename2: (groupid=0, jobs=1): err= 0: pid=3507231: Wed Nov 20 09:22:55 2024 00:32:19.269 read: IOPS=72, BW=291KiB/s (298kB/s)(2944KiB/10104msec) 00:32:19.269 slat (usec): min=8, max=105, avg=25.50, stdev=23.49 00:32:19.269 clat (msec): min=82, max=411, avg=218.29, stdev=43.97 00:32:19.269 lat (msec): min=82, max=411, avg=218.31, stdev=43.98 00:32:19.269 clat percentiles (msec): 00:32:19.269 | 1.00th=[ 93], 5.00th=[ 165], 10.00th=[ 182], 20.00th=[ 197], 00:32:19.269 | 30.00th=[ 203], 40.00th=[ 209], 50.00th=[ 215], 60.00th=[ 220], 00:32:19.269 | 70.00th=[ 
224], 80.00th=[ 245], 90.00th=[ 275], 95.00th=[ 288], 00:32:19.269 | 99.00th=[ 330], 99.50th=[ 401], 99.90th=[ 414], 99.95th=[ 414], 00:32:19.269 | 99.99th=[ 414] 00:32:19.269 bw ( KiB/s): min= 256, max= 384, per=4.74%, avg=288.00, stdev=51.65, samples=20 00:32:19.269 iops : min= 64, max= 96, avg=72.00, stdev=12.91, samples=20 00:32:19.269 lat (msec) : 100=4.08%, 250=76.63%, 500=19.29% 00:32:19.269 cpu : usr=98.19%, sys=1.15%, ctx=75, majf=0, minf=9 00:32:19.269 IO depths : 1=1.2%, 2=7.5%, 4=25.0%, 8=55.0%, 16=11.3%, 32=0.0%, >=64=0.0% 00:32:19.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.269 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.269 issued rwts: total=736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.269 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:19.269 filename2: (groupid=0, jobs=1): err= 0: pid=3507232: Wed Nov 20 09:22:55 2024 00:32:19.269 read: IOPS=53, BW=216KiB/s (221kB/s)(2176KiB/10079msec) 00:32:19.269 slat (usec): min=5, max=103, avg=67.76, stdev=18.55 00:32:19.269 clat (msec): min=166, max=412, avg=295.82, stdev=38.57 00:32:19.269 lat (msec): min=166, max=412, avg=295.89, stdev=38.57 00:32:19.269 clat percentiles (msec): 00:32:19.269 | 1.00th=[ 197], 5.00th=[ 234], 10.00th=[ 245], 20.00th=[ 271], 00:32:19.269 | 30.00th=[ 279], 40.00th=[ 284], 50.00th=[ 300], 60.00th=[ 309], 00:32:19.269 | 70.00th=[ 317], 80.00th=[ 326], 90.00th=[ 342], 95.00th=[ 351], 00:32:19.269 | 99.00th=[ 376], 99.50th=[ 388], 99.90th=[ 414], 99.95th=[ 414], 00:32:19.269 | 99.99th=[ 414] 00:32:19.269 bw ( KiB/s): min= 128, max= 256, per=3.47%, avg=211.20, stdev=61.11, samples=20 00:32:19.269 iops : min= 32, max= 64, avg=52.80, stdev=15.28, samples=20 00:32:19.269 lat (msec) : 250=10.48%, 500=89.52% 00:32:19.269 cpu : usr=98.10%, sys=1.36%, ctx=41, majf=0, minf=9 00:32:19.269 IO depths : 1=5.0%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.5%, 32=0.0%, >=64=0.0% 00:32:19.269 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.269 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.269 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.269 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:19.269 filename2: (groupid=0, jobs=1): err= 0: pid=3507233: Wed Nov 20 09:22:55 2024 00:32:19.269 read: IOPS=71, BW=284KiB/s (291kB/s)(2872KiB/10104msec) 00:32:19.269 slat (usec): min=8, max=103, avg=19.43, stdev=13.50 00:32:19.269 clat (msec): min=92, max=348, avg=224.22, stdev=45.13 00:32:19.269 lat (msec): min=92, max=348, avg=224.24, stdev=45.13 00:32:19.269 clat percentiles (msec): 00:32:19.269 | 1.00th=[ 93], 5.00th=[ 157], 10.00th=[ 188], 20.00th=[ 201], 00:32:19.269 | 30.00th=[ 209], 40.00th=[ 211], 50.00th=[ 218], 60.00th=[ 230], 00:32:19.269 | 70.00th=[ 243], 80.00th=[ 271], 90.00th=[ 279], 95.00th=[ 288], 00:32:19.269 | 99.00th=[ 326], 99.50th=[ 342], 99.90th=[ 351], 99.95th=[ 351], 00:32:19.269 | 99.99th=[ 351] 00:32:19.269 bw ( KiB/s): min= 224, max= 384, per=4.61%, avg=280.80, stdev=39.35, samples=20 00:32:19.269 iops : min= 56, max= 96, avg=70.20, stdev= 9.84, samples=20 00:32:19.269 lat (msec) : 100=4.46%, 250=70.19%, 500=25.35% 00:32:19.269 cpu : usr=98.41%, sys=1.07%, ctx=55, majf=0, minf=9 00:32:19.269 IO depths : 1=2.1%, 2=5.0%, 4=14.8%, 8=67.5%, 16=10.6%, 32=0.0%, >=64=0.0% 00:32:19.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.269 complete : 0=0.0%, 4=91.1%, 8=3.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.269 issued rwts: total=718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.269 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:19.269 filename2: (groupid=0, jobs=1): err= 0: pid=3507234: Wed Nov 20 09:22:55 2024 00:32:19.269 read: IOPS=53, BW=216KiB/s (221kB/s)(2176KiB/10089msec) 00:32:19.269 slat (usec): min=16, max=104, avg=72.50, stdev=11.94 00:32:19.269 clat (msec): min=156, max=445, avg=294.62, 
stdev=47.30 00:32:19.269 lat (msec): min=156, max=445, avg=294.69, stdev=47.30 00:32:19.269 clat percentiles (msec): 00:32:19.269 | 1.00th=[ 192], 5.00th=[ 213], 10.00th=[ 215], 20.00th=[ 271], 00:32:19.269 | 30.00th=[ 279], 40.00th=[ 288], 50.00th=[ 292], 60.00th=[ 305], 00:32:19.269 | 70.00th=[ 317], 80.00th=[ 330], 90.00th=[ 347], 95.00th=[ 355], 00:32:19.269 | 99.00th=[ 439], 99.50th=[ 443], 99.90th=[ 447], 99.95th=[ 447], 00:32:19.269 | 99.99th=[ 447] 00:32:19.269 bw ( KiB/s): min= 128, max= 256, per=3.47%, avg=211.20, stdev=59.55, samples=20 00:32:19.269 iops : min= 32, max= 64, avg=52.80, stdev=14.89, samples=20 00:32:19.269 lat (msec) : 250=14.15%, 500=85.85% 00:32:19.269 cpu : usr=98.16%, sys=1.29%, ctx=30, majf=0, minf=9 00:32:19.269 IO depths : 1=3.5%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.0%, 32=0.0%, >=64=0.0% 00:32:19.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.270 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.270 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.270 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:19.270 filename2: (groupid=0, jobs=1): err= 0: pid=3507235: Wed Nov 20 09:22:55 2024 00:32:19.270 read: IOPS=53, BW=216KiB/s (221kB/s)(2176KiB/10078msec) 00:32:19.270 slat (nsec): min=8758, max=52754, avg=16904.36, stdev=6335.56 00:32:19.270 clat (msec): min=118, max=436, avg=296.24, stdev=48.89 00:32:19.270 lat (msec): min=118, max=436, avg=296.26, stdev=48.89 00:32:19.270 clat percentiles (msec): 00:32:19.270 | 1.00th=[ 161], 5.00th=[ 215], 10.00th=[ 249], 20.00th=[ 275], 00:32:19.270 | 30.00th=[ 288], 40.00th=[ 288], 50.00th=[ 296], 60.00th=[ 305], 00:32:19.270 | 70.00th=[ 313], 80.00th=[ 326], 90.00th=[ 347], 95.00th=[ 355], 00:32:19.270 | 99.00th=[ 426], 99.50th=[ 435], 99.90th=[ 439], 99.95th=[ 439], 00:32:19.270 | 99.99th=[ 439] 00:32:19.270 bw ( KiB/s): min= 128, max= 256, per=3.47%, avg=211.20, stdev=61.11, samples=20 
00:32:19.270 iops : min= 32, max= 64, avg=52.80, stdev=15.28, samples=20 00:32:19.270 lat (msec) : 250=12.87%, 500=87.13% 00:32:19.270 cpu : usr=98.39%, sys=1.22%, ctx=16, majf=0, minf=9 00:32:19.270 IO depths : 1=5.3%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.2%, 32=0.0%, >=64=0.0% 00:32:19.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.270 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.270 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.270 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:19.270 00:32:19.270 Run status group 0 (all jobs): 00:32:19.270 READ: bw=6075KiB/s (6221kB/s), 215KiB/s-331KiB/s (220kB/s-339kB/s), io=60.0MiB (62.9MB), run=10018-10107msec 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:19.270 
09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 
-- # rpc_cmd bdev_null_delete bdev_null2 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:19.270 bdev_null0 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:19.270 09:22:55 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:19.270 [2024-11-20 09:22:55.670846] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:19.270 bdev_null1 00:32:19.270 09:22:55 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:32:19.270 09:22:55 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:19.270 09:22:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:19.270 { 00:32:19.270 "params": { 00:32:19.270 "name": "Nvme$subsystem", 00:32:19.270 "trtype": "$TEST_TRANSPORT", 00:32:19.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:19.270 "adrfam": "ipv4", 00:32:19.270 "trsvcid": "$NVMF_PORT", 00:32:19.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:19.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:19.270 "hdgst": ${hdgst:-false}, 00:32:19.270 "ddgst": ${ddgst:-false} 00:32:19.270 }, 00:32:19.270 "method": "bdev_nvme_attach_controller" 00:32:19.270 } 00:32:19.270 EOF 00:32:19.270 )") 00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 
00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:19.271 { 00:32:19.271 "params": { 00:32:19.271 "name": "Nvme$subsystem", 00:32:19.271 "trtype": "$TEST_TRANSPORT", 00:32:19.271 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:19.271 "adrfam": "ipv4", 00:32:19.271 "trsvcid": "$NVMF_PORT", 00:32:19.271 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:19.271 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:19.271 "hdgst": ${hdgst:-false}, 00:32:19.271 "ddgst": ${ddgst:-false} 00:32:19.271 }, 00:32:19.271 "method": "bdev_nvme_attach_controller" 00:32:19.271 } 00:32:19.271 EOF 00:32:19.271 )") 00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:19.271 
09:22:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:19.271 "params": { 00:32:19.271 "name": "Nvme0", 00:32:19.271 "trtype": "tcp", 00:32:19.271 "traddr": "10.0.0.2", 00:32:19.271 "adrfam": "ipv4", 00:32:19.271 "trsvcid": "4420", 00:32:19.271 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:19.271 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:19.271 "hdgst": false, 00:32:19.271 "ddgst": false 00:32:19.271 }, 00:32:19.271 "method": "bdev_nvme_attach_controller" 00:32:19.271 },{ 00:32:19.271 "params": { 00:32:19.271 "name": "Nvme1", 00:32:19.271 "trtype": "tcp", 00:32:19.271 "traddr": "10.0.0.2", 00:32:19.271 "adrfam": "ipv4", 00:32:19.271 "trsvcid": "4420", 00:32:19.271 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:19.271 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:19.271 "hdgst": false, 00:32:19.271 "ddgst": false 00:32:19.271 }, 00:32:19.271 "method": "bdev_nvme_attach_controller" 00:32:19.271 }' 00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:19.271 09:22:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:19.271 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:19.271 ... 00:32:19.271 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:19.271 ... 00:32:19.271 fio-3.35 00:32:19.271 Starting 4 threads 00:32:25.887 00:32:25.887 filename0: (groupid=0, jobs=1): err= 0: pid=3508617: Wed Nov 20 09:23:01 2024 00:32:25.887 read: IOPS=1872, BW=14.6MiB/s (15.3MB/s)(73.2MiB/5003msec) 00:32:25.887 slat (nsec): min=5147, max=67280, avg=17452.45, stdev=8639.49 00:32:25.887 clat (usec): min=833, max=7871, avg=4216.86, stdev=423.83 00:32:25.887 lat (usec): min=851, max=7892, avg=4234.31, stdev=423.79 00:32:25.887 clat percentiles (usec): 00:32:25.887 | 1.00th=[ 3032], 5.00th=[ 3720], 10.00th=[ 3884], 20.00th=[ 4015], 00:32:25.887 | 30.00th=[ 4113], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4228], 00:32:25.887 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4817], 00:32:25.887 | 99.00th=[ 5800], 99.50th=[ 6194], 99.90th=[ 7242], 99.95th=[ 7373], 00:32:25.887 | 99.99th=[ 7898] 00:32:25.887 bw ( KiB/s): min=14560, max=15328, per=24.88%, avg=14972.80, stdev=245.19, samples=10 00:32:25.887 iops : min= 1820, max= 1916, avg=1871.60, stdev=30.65, samples=10 00:32:25.887 lat (usec) : 1000=0.03% 00:32:25.887 lat (msec) : 2=0.17%, 4=17.60%, 10=82.20% 00:32:25.887 cpu : usr=96.06%, sys=3.48%, ctx=9, majf=0, minf=29 00:32:25.887 IO depths : 1=0.4%, 2=12.1%, 4=60.2%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:25.887 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.888 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.888 issued rwts: total=9366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.888 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:25.888 filename0: (groupid=0, jobs=1): err= 0: pid=3508618: Wed Nov 20 09:23:01 2024 00:32:25.888 read: IOPS=1866, BW=14.6MiB/s (15.3MB/s)(72.9MiB/5003msec) 00:32:25.888 slat (nsec): min=4137, max=70349, avg=19789.60, stdev=9062.53 00:32:25.888 clat (usec): min=592, max=7560, avg=4208.67, stdev=602.35 00:32:25.888 lat (usec): min=606, max=7588, avg=4228.46, stdev=602.69 00:32:25.888 clat percentiles (usec): 00:32:25.888 | 1.00th=[ 1614], 5.00th=[ 3687], 10.00th=[ 3916], 20.00th=[ 4047], 00:32:25.888 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4178], 60.00th=[ 4228], 00:32:25.888 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 5080], 00:32:25.888 | 99.00th=[ 6718], 99.50th=[ 7111], 99.90th=[ 7373], 99.95th=[ 7504], 00:32:25.888 | 99.99th=[ 7570] 00:32:25.888 bw ( KiB/s): min=14352, max=15216, per=24.80%, avg=14926.40, stdev=246.89, samples=10 00:32:25.888 iops : min= 1794, max= 1902, avg=1865.80, stdev=30.86, samples=10 00:32:25.888 lat (usec) : 750=0.01%, 1000=0.21% 00:32:25.888 lat (msec) : 2=1.10%, 4=14.47%, 10=84.20% 00:32:25.888 cpu : usr=96.10%, sys=3.38%, ctx=8, majf=0, minf=45 00:32:25.888 IO depths : 1=0.2%, 2=20.9%, 4=53.0%, 8=25.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:25.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.888 complete : 0=0.0%, 4=90.9%, 8=9.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.888 issued rwts: total=9337,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.888 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:25.888 filename1: (groupid=0, jobs=1): err= 0: pid=3508619: Wed Nov 20 09:23:01 2024 00:32:25.888 read: IOPS=1883, BW=14.7MiB/s (15.4MB/s)(73.6MiB/5002msec) 00:32:25.888 slat (nsec): 
min=4062, max=92213, avg=20295.02, stdev=9781.03 00:32:25.888 clat (usec): min=856, max=7600, avg=4166.04, stdev=502.39 00:32:25.888 lat (usec): min=870, max=7615, avg=4186.33, stdev=503.19 00:32:25.888 clat percentiles (usec): 00:32:25.888 | 1.00th=[ 2245], 5.00th=[ 3589], 10.00th=[ 3851], 20.00th=[ 4015], 00:32:25.888 | 30.00th=[ 4080], 40.00th=[ 4113], 50.00th=[ 4146], 60.00th=[ 4228], 00:32:25.888 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4686], 00:32:25.888 | 99.00th=[ 6390], 99.50th=[ 6718], 99.90th=[ 7242], 99.95th=[ 7373], 00:32:25.888 | 99.99th=[ 7570] 00:32:25.888 bw ( KiB/s): min=14864, max=15280, per=25.04%, avg=15067.00, stdev=150.03, samples=10 00:32:25.888 iops : min= 1858, max= 1910, avg=1883.30, stdev=18.66, samples=10 00:32:25.888 lat (usec) : 1000=0.07% 00:32:25.888 lat (msec) : 2=0.54%, 4=18.02%, 10=81.36% 00:32:25.888 cpu : usr=91.76%, sys=5.36%, ctx=179, majf=0, minf=43 00:32:25.888 IO depths : 1=0.5%, 2=20.9%, 4=53.1%, 8=25.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:25.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.888 complete : 0=0.0%, 4=90.8%, 8=9.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.888 issued rwts: total=9423,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.888 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:25.888 filename1: (groupid=0, jobs=1): err= 0: pid=3508620: Wed Nov 20 09:23:01 2024 00:32:25.888 read: IOPS=1901, BW=14.9MiB/s (15.6MB/s)(74.3MiB/5001msec) 00:32:25.888 slat (nsec): min=4313, max=92225, avg=19357.42, stdev=9573.69 00:32:25.888 clat (usec): min=789, max=7694, avg=4133.50, stdev=513.18 00:32:25.888 lat (usec): min=802, max=7708, avg=4152.85, stdev=514.36 00:32:25.888 clat percentiles (usec): 00:32:25.888 | 1.00th=[ 2180], 5.00th=[ 3458], 10.00th=[ 3752], 20.00th=[ 3982], 00:32:25.888 | 30.00th=[ 4047], 40.00th=[ 4113], 50.00th=[ 4146], 60.00th=[ 4228], 00:32:25.888 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4555], 
00:32:25.888 | 99.00th=[ 6325], 99.50th=[ 6783], 99.90th=[ 7242], 99.95th=[ 7504], 00:32:25.888 | 99.99th=[ 7701] 00:32:25.888 bw ( KiB/s): min=14800, max=15664, per=25.26%, avg=15201.40, stdev=305.93, samples=10 00:32:25.888 iops : min= 1850, max= 1958, avg=1900.10, stdev=38.23, samples=10 00:32:25.888 lat (usec) : 1000=0.07% 00:32:25.888 lat (msec) : 2=0.72%, 4=20.91%, 10=78.30% 00:32:25.888 cpu : usr=96.46%, sys=3.06%, ctx=8, majf=0, minf=58 00:32:25.888 IO depths : 1=0.5%, 2=20.3%, 4=53.2%, 8=26.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:25.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.888 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.888 issued rwts: total=9507,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.888 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:25.888 00:32:25.888 Run status group 0 (all jobs): 00:32:25.888 READ: bw=58.8MiB/s (61.6MB/s), 14.6MiB/s-14.9MiB/s (15.3MB/s-15.6MB/s), io=294MiB (308MB), run=5001-5003msec 00:32:25.888 09:23:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:32:25.888 09:23:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:25.888 09:23:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:25.888 09:23:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:25.888 09:23:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:25.888 09:23:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:25.888 09:23:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.888 09:23:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.888 09:23:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.888 09:23:02 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:25.888 09:23:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.888 09:23:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.888 09:23:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.888 09:23:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:25.888 09:23:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:25.888 09:23:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:25.888 09:23:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:25.888 09:23:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.888 09:23:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.888 09:23:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.888 09:23:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:25.888 09:23:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.888 09:23:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.888 09:23:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.888 00:32:25.888 real 0m24.962s 00:32:25.888 user 4m35.933s 00:32:25.888 sys 0m5.762s 00:32:25.888 09:23:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:25.888 09:23:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:25.888 ************************************ 00:32:25.888 END TEST fio_dif_rand_params 00:32:25.888 ************************************ 00:32:25.888 09:23:02 nvmf_dif -- target/dif.sh@144 -- # run_test 
fio_dif_digest fio_dif_digest 00:32:25.888 09:23:02 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:25.888 09:23:02 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:25.888 09:23:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:25.888 ************************************ 00:32:25.888 START TEST fio_dif_digest 00:32:25.888 ************************************ 00:32:25.888 09:23:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:32:25.888 09:23:02 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:32:25.888 09:23:02 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:32:25.888 09:23:02 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:32:25.888 09:23:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:32:25.888 09:23:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:32:25.888 09:23:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:32:25.888 09:23:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:32:25.888 09:23:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:32:25.888 09:23:02 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:32:25.888 09:23:02 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:32:25.888 09:23:02 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:32:25.888 09:23:02 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:32:25.888 09:23:02 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:32:25.888 09:23:02 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:32:25.888 09:23:02 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:32:25.888 09:23:02 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:25.888 09:23:02 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.888 09:23:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:25.888 bdev_null0 00:32:25.888 09:23:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:25.889 [2024-11-20 09:23:02.224229] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- 
target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:25.889 { 00:32:25.889 "params": { 00:32:25.889 "name": "Nvme$subsystem", 00:32:25.889 "trtype": "$TEST_TRANSPORT", 00:32:25.889 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:25.889 "adrfam": "ipv4", 00:32:25.889 "trsvcid": "$NVMF_PORT", 00:32:25.889 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:25.889 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:25.889 "hdgst": ${hdgst:-false}, 00:32:25.889 "ddgst": ${ddgst:-false} 00:32:25.889 }, 00:32:25.889 "method": "bdev_nvme_attach_controller" 00:32:25.889 } 00:32:25.889 EOF 00:32:25.889 )") 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:25.889 "params": { 00:32:25.889 "name": "Nvme0", 00:32:25.889 "trtype": "tcp", 00:32:25.889 "traddr": "10.0.0.2", 00:32:25.889 "adrfam": "ipv4", 00:32:25.889 "trsvcid": "4420", 00:32:25.889 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:25.889 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:25.889 "hdgst": true, 00:32:25.889 "ddgst": true 00:32:25.889 }, 00:32:25.889 "method": "bdev_nvme_attach_controller" 00:32:25.889 }' 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:25.889 09:23:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:25.889 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:25.889 ... 
00:32:25.889 fio-3.35 00:32:25.889 Starting 3 threads 00:32:38.075 00:32:38.075 filename0: (groupid=0, jobs=1): err= 0: pid=3509491: Wed Nov 20 09:23:13 2024 00:32:38.075 read: IOPS=216, BW=27.0MiB/s (28.3MB/s)(272MiB/10047msec) 00:32:38.075 slat (nsec): min=4496, max=47317, avg=18793.08, stdev=4125.69 00:32:38.075 clat (usec): min=9940, max=52859, avg=13828.62, stdev=1483.87 00:32:38.075 lat (usec): min=9964, max=52878, avg=13847.42, stdev=1483.87 00:32:38.075 clat percentiles (usec): 00:32:38.075 | 1.00th=[11469], 5.00th=[12125], 10.00th=[12518], 20.00th=[13042], 00:32:38.075 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13829], 60.00th=[14091], 00:32:38.075 | 70.00th=[14222], 80.00th=[14615], 90.00th=[15008], 95.00th=[15401], 00:32:38.075 | 99.00th=[16319], 99.50th=[16712], 99.90th=[17433], 99.95th=[47973], 00:32:38.075 | 99.99th=[52691] 00:32:38.075 bw ( KiB/s): min=26624, max=29184, per=35.11%, avg=27776.00, stdev=677.31, samples=20 00:32:38.075 iops : min= 208, max= 228, avg=217.00, stdev= 5.29, samples=20 00:32:38.075 lat (msec) : 10=0.05%, 20=99.86%, 50=0.05%, 100=0.05% 00:32:38.075 cpu : usr=94.83%, sys=4.67%, ctx=23, majf=0, minf=106 00:32:38.075 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:38.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:38.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:38.075 issued rwts: total=2173,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:38.075 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:38.075 filename0: (groupid=0, jobs=1): err= 0: pid=3509492: Wed Nov 20 09:23:13 2024 00:32:38.075 read: IOPS=199, BW=24.9MiB/s (26.1MB/s)(250MiB/10046msec) 00:32:38.075 slat (nsec): min=4556, max=46862, avg=19092.31, stdev=4406.22 00:32:38.075 clat (usec): min=12521, max=52480, avg=15024.09, stdev=1460.41 00:32:38.075 lat (usec): min=12541, max=52501, avg=15043.18, stdev=1460.54 00:32:38.075 clat percentiles (usec): 
00:32:38.075 | 1.00th=[12911], 5.00th=[13435], 10.00th=[13829], 20.00th=[14222], 00:32:38.075 | 30.00th=[14484], 40.00th=[14746], 50.00th=[14877], 60.00th=[15139], 00:32:38.075 | 70.00th=[15401], 80.00th=[15795], 90.00th=[16319], 95.00th=[16581], 00:32:38.075 | 99.00th=[17433], 99.50th=[17695], 99.90th=[47449], 99.95th=[52691], 00:32:38.075 | 99.99th=[52691] 00:32:38.075 bw ( KiB/s): min=24576, max=26112, per=32.33%, avg=25574.40, stdev=430.78, samples=20 00:32:38.075 iops : min= 192, max= 204, avg=199.80, stdev= 3.37, samples=20 00:32:38.075 lat (msec) : 20=99.75%, 50=0.20%, 100=0.05% 00:32:38.075 cpu : usr=95.29%, sys=4.15%, ctx=18, majf=0, minf=182 00:32:38.075 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:38.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:38.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:38.075 issued rwts: total=2000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:38.075 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:38.075 filename0: (groupid=0, jobs=1): err= 0: pid=3509493: Wed Nov 20 09:23:13 2024 00:32:38.075 read: IOPS=202, BW=25.3MiB/s (26.6MB/s)(255MiB/10045msec) 00:32:38.075 slat (nsec): min=4471, max=47171, avg=17639.50, stdev=3799.11 00:32:38.075 clat (usec): min=10834, max=53327, avg=14759.41, stdev=1517.15 00:32:38.075 lat (usec): min=10853, max=53341, avg=14777.05, stdev=1517.13 00:32:38.075 clat percentiles (usec): 00:32:38.075 | 1.00th=[12518], 5.00th=[13173], 10.00th=[13566], 20.00th=[13960], 00:32:38.075 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14746], 60.00th=[15008], 00:32:38.075 | 70.00th=[15139], 80.00th=[15533], 90.00th=[16057], 95.00th=[16319], 00:32:38.075 | 99.00th=[17171], 99.50th=[17695], 99.90th=[18744], 99.95th=[50070], 00:32:38.075 | 99.99th=[53216] 00:32:38.075 bw ( KiB/s): min=25138, max=26880, per=32.90%, avg=26024.90, stdev=403.32, samples=20 00:32:38.075 iops : min= 196, max= 210, 
avg=203.30, stdev= 3.20, samples=20 00:32:38.075 lat (msec) : 20=99.90%, 50=0.05%, 100=0.05% 00:32:38.075 cpu : usr=95.11%, sys=3.93%, ctx=331, majf=0, minf=115 00:32:38.075 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:38.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:38.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:38.075 issued rwts: total=2036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:38.075 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:38.075 00:32:38.075 Run status group 0 (all jobs): 00:32:38.075 READ: bw=77.2MiB/s (81.0MB/s), 24.9MiB/s-27.0MiB/s (26.1MB/s-28.3MB/s), io=776MiB (814MB), run=10045-10047msec 00:32:38.075 09:23:13 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:32:38.075 09:23:13 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:32:38.075 09:23:13 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:32:38.075 09:23:13 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:38.075 09:23:13 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:32:38.076 09:23:13 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:38.076 09:23:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.076 09:23:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:38.076 09:23:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.076 09:23:13 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:38.076 09:23:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.076 09:23:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:38.076 09:23:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:32:38.076 00:32:38.076 real 0m11.177s 00:32:38.076 user 0m29.816s 00:32:38.076 sys 0m1.559s 00:32:38.076 09:23:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:38.076 09:23:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:38.076 ************************************ 00:32:38.076 END TEST fio_dif_digest 00:32:38.076 ************************************ 00:32:38.076 09:23:13 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:38.076 09:23:13 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:32:38.076 09:23:13 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:38.076 09:23:13 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:32:38.076 09:23:13 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:38.076 09:23:13 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:32:38.076 09:23:13 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:38.076 09:23:13 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:38.076 rmmod nvme_tcp 00:32:38.076 rmmod nvme_fabrics 00:32:38.076 rmmod nvme_keyring 00:32:38.076 09:23:13 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:38.076 09:23:13 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:32:38.076 09:23:13 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:32:38.076 09:23:13 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3503194 ']' 00:32:38.076 09:23:13 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3503194 00:32:38.076 09:23:13 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 3503194 ']' 00:32:38.076 09:23:13 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 3503194 00:32:38.076 09:23:13 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:32:38.076 09:23:13 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:38.076 09:23:13 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3503194 00:32:38.076 09:23:13 nvmf_dif -- common/autotest_common.sh@958 -- # 
process_name=reactor_0 00:32:38.076 09:23:13 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:38.076 09:23:13 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3503194' 00:32:38.076 killing process with pid 3503194 00:32:38.076 09:23:13 nvmf_dif -- common/autotest_common.sh@971 -- # kill 3503194 00:32:38.076 09:23:13 nvmf_dif -- common/autotest_common.sh@976 -- # wait 3503194 00:32:38.076 09:23:13 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:32:38.076 09:23:13 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:38.076 Waiting for block devices as requested 00:32:38.076 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:38.076 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:38.335 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:38.335 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:38.335 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:38.593 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:38.593 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:38.593 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:38.593 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:32:38.851 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:38.851 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:39.109 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:39.109 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:39.109 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:39.109 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:39.366 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:39.366 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:39.366 09:23:16 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:39.366 09:23:16 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:39.366 09:23:16 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:32:39.366 09:23:16 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:32:39.366 
09:23:16 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:39.366 09:23:16 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:32:39.366 09:23:16 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:39.366 09:23:16 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:39.366 09:23:16 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.366 09:23:16 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:39.366 09:23:16 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:41.896 09:23:18 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:41.896 00:32:41.896 real 1m8.055s 00:32:41.896 user 6m34.315s 00:32:41.896 sys 0m16.694s 00:32:41.896 09:23:18 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:41.896 09:23:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:41.896 ************************************ 00:32:41.896 END TEST nvmf_dif 00:32:41.896 ************************************ 00:32:41.896 09:23:18 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:41.896 09:23:18 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:41.896 09:23:18 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:41.896 09:23:18 -- common/autotest_common.sh@10 -- # set +x 00:32:41.896 ************************************ 00:32:41.896 START TEST nvmf_abort_qd_sizes 00:32:41.896 ************************************ 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:41.896 * Looking for test storage... 
00:32:41.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:41.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.896 --rc genhtml_branch_coverage=1 00:32:41.896 --rc genhtml_function_coverage=1 00:32:41.896 --rc genhtml_legend=1 00:32:41.896 --rc geninfo_all_blocks=1 00:32:41.896 --rc geninfo_unexecuted_blocks=1 00:32:41.896 00:32:41.896 ' 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:41.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.896 --rc genhtml_branch_coverage=1 00:32:41.896 --rc genhtml_function_coverage=1 00:32:41.896 --rc genhtml_legend=1 00:32:41.896 --rc 
geninfo_all_blocks=1 00:32:41.896 --rc geninfo_unexecuted_blocks=1 00:32:41.896 00:32:41.896 ' 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:41.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.896 --rc genhtml_branch_coverage=1 00:32:41.896 --rc genhtml_function_coverage=1 00:32:41.896 --rc genhtml_legend=1 00:32:41.896 --rc geninfo_all_blocks=1 00:32:41.896 --rc geninfo_unexecuted_blocks=1 00:32:41.896 00:32:41.896 ' 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:41.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.896 --rc genhtml_branch_coverage=1 00:32:41.896 --rc genhtml_function_coverage=1 00:32:41.896 --rc genhtml_legend=1 00:32:41.896 --rc geninfo_all_blocks=1 00:32:41.896 --rc geninfo_unexecuted_blocks=1 00:32:41.896 00:32:41.896 ' 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:41.896 09:23:18 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:41.896 09:23:18 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:41.896 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:41.896 09:23:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:32:41.897 09:23:18 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:41.897 09:23:18 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:41.897 09:23:18 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:41.897 09:23:18 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:41.897 09:23:18 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:41.897 09:23:18 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:41.897 09:23:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:41.897 09:23:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:41.897 09:23:18 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:41.897 09:23:18 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:41.897 09:23:18 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:32:41.897 09:23:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:43.797 09:23:20 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:32:43.797 Found 0000:09:00.0 (0x8086 - 0x159b) 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:32:43.797 Found 0000:09:00.1 (0x8086 - 0x159b) 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:43.797 Found net devices under 0000:09:00.0: cvl_0_0 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:43.797 Found net devices under 0000:09:00.1: cvl_0_1 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:43.797 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:43.798 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:43.798 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:43.798 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:43.798 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:43.798 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:43.798 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:43.798 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:43.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:43.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:32:43.798 00:32:43.798 --- 10.0.0.2 ping statistics --- 00:32:43.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:43.798 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:32:43.798 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:43.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:43.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:32:43.798 00:32:43.798 --- 10.0.0.1 ping statistics --- 00:32:43.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:43.798 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:32:43.798 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:43.798 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:32:43.798 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:32:43.798 09:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:45.173 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:45.173 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:45.173 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:45.173 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:45.173 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:45.173 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:45.173 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:45.173 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:45.173 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:45.173 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:45.173 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:45.173 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:45.173 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:45.173 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:45.173 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:45.173 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:46.106 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:32:46.106 09:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:46.106 09:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:46.106 09:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:46.106 09:23:23 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:46.106 09:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:46.106 09:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:46.364 09:23:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:32:46.364 09:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:46.364 09:23:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:46.364 09:23:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:46.364 09:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3514408 00:32:46.364 09:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:46.364 09:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3514408 00:32:46.364 09:23:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 3514408 ']' 00:32:46.364 09:23:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:46.364 09:23:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:46.364 09:23:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:46.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:46.364 09:23:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:46.364 09:23:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:46.364 [2024-11-20 09:23:23.190165] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
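The `nvmf_tcp_init` steps traced above (namespace creation, moving the target-side port, addressing, and the firewall rule) can be condensed into a short standalone sketch. This is a re-derivation from the trace, not the harness script itself: it needs root, and the interface names `cvl_0_0`/`cvl_0_1` and the `10.0.0.x` addresses are simply the values observed in this run.

```shell
# Condensed from the nvmf_tcp_init trace above (requires root; interface
# names and addresses are the ones this particular run discovered).
ip netns add cvl_0_0_ns_spdk                      # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator IP (host side)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP traffic to the default port, as the ipts helper does:
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                # target reachable from host
```

The two ping checks in the log (host to 10.0.0.2, and namespace back to 10.0.0.1) are what `return 0` from `nvmf_tcp_init` depends on.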
00:32:46.364 [2024-11-20 09:23:23.190245] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:46.364 [2024-11-20 09:23:23.261089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:46.364 [2024-11-20 09:23:23.321939] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:46.364 [2024-11-20 09:23:23.321983] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:46.364 [2024-11-20 09:23:23.322011] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:46.364 [2024-11-20 09:23:23.322023] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:46.364 [2024-11-20 09:23:23.322034] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:46.364 [2024-11-20 09:23:23.323560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:46.364 [2024-11-20 09:23:23.323632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:46.364 [2024-11-20 09:23:23.323757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:46.364 [2024-11-20 09:23:23.323760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:46.622 09:23:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:46.622 09:23:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:32:46.622 09:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:46.622 09:23:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:46.622 09:23:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:46.622 09:23:23 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:46.622 09:23:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:46.622 09:23:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:32:46.622 09:23:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:32:46.622 09:23:23 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:32:46.622 09:23:23 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:32:46.622 09:23:23 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:0b:00.0 ]] 00:32:46.622 09:23:23 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:46.622 09:23:23 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:32:46.622 09:23:23 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:0b:00.0 ]] 
00:32:46.622 09:23:23 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:32:46.622 09:23:23 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:32:46.622 09:23:23 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:32:46.622 09:23:23 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:32:46.622 09:23:23 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:0b:00.0 00:32:46.622 09:23:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:32:46.622 09:23:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:0b:00.0 00:32:46.622 09:23:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:32:46.622 09:23:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:46.622 09:23:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:46.622 09:23:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:46.622 ************************************ 00:32:46.622 START TEST spdk_target_abort 00:32:46.622 ************************************ 00:32:46.622 09:23:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:32:46.622 09:23:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:46.622 09:23:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target 00:32:46.622 09:23:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.622 09:23:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:49.900 spdk_targetn1 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:49.900 [2024-11-20 09:23:26.349188] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:49.900 [2024-11-20 09:23:26.394637] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:49.900 09:23:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:53.176 Initializing NVMe Controllers 00:32:53.176 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:53.176 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:53.176 Initialization complete. Launching workers. 
00:32:53.176 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13335, failed: 0 00:32:53.176 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1288, failed to submit 12047 00:32:53.176 success 766, unsuccessful 522, failed 0 00:32:53.176 09:23:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:53.176 09:23:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:56.452 Initializing NVMe Controllers 00:32:56.452 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:56.452 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:56.452 Initialization complete. Launching workers. 00:32:56.452 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8946, failed: 0 00:32:56.452 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1216, failed to submit 7730 00:32:56.452 success 327, unsuccessful 889, failed 0 00:32:56.452 09:23:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:56.452 09:23:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:59.728 Initializing NVMe Controllers 00:32:59.728 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:59.728 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:59.728 Initialization complete. Launching workers. 
00:32:59.728 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30888, failed: 0 00:32:59.728 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2729, failed to submit 28159 00:32:59.728 success 508, unsuccessful 2221, failed 0 00:32:59.728 09:23:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:59.728 09:23:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.728 09:23:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:59.728 09:23:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.728 09:23:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:59.728 09:23:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.728 09:23:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:00.660 09:23:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.660 09:23:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3514408 00:33:00.660 09:23:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 3514408 ']' 00:33:00.660 09:23:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 3514408 00:33:00.660 09:23:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:33:00.660 09:23:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:00.660 09:23:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3514408 00:33:00.660 09:23:37 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:00.660 09:23:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:00.660 09:23:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3514408' 00:33:00.660 killing process with pid 3514408 00:33:00.660 09:23:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 3514408 00:33:00.660 09:23:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 3514408 00:33:00.660 00:33:00.660 real 0m14.122s 00:33:00.660 user 0m53.436s 00:33:00.660 sys 0m2.718s 00:33:00.660 09:23:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:00.660 09:23:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:00.660 ************************************ 00:33:00.660 END TEST spdk_target_abort 00:33:00.660 ************************************ 00:33:00.660 09:23:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:33:00.660 09:23:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:33:00.660 09:23:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:00.660 09:23:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:00.660 ************************************ 00:33:00.660 START TEST kernel_target_abort 00:33:00.660 ************************************ 00:33:00.660 09:23:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:33:00.660 09:23:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:33:00.660 09:23:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:33:00.660 09:23:37 
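The `spdk_target_abort` trace above builds a transport ID string field by field (`trtype`, `adrfam`, `traddr`, `trsvcid`, `subnqn`) and then runs the abort example once per queue depth in `qds=(4 24 64)`. A minimal sketch of that pattern, assuming the abort binary path shown in the log; the dry-run `echo` stands in for actually executing the tool:

```shell
# Sketch of the rabort helper pattern traced in the log above.
# abort_bin is the path seen in the log, treated here as an assumption.
abort_bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort

# Assemble the transport ID string one field at a time, as the xtrace shows.
target=
for r in trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn; do
  target="${target:+$target }$r"
done

# One abort run per queue depth; dry-run prints the command instead of executing.
for qd in 4 24 64; do
  echo "$abort_bin -q $qd -w rw -M 50 -o 4096 -r '$target'"
done
```

Each run reports I/O completed, aborts submitted, and success/unsuccessful/failed counts, which is what the three result blocks in the log show for queue depths 4, 24, and 64.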
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:00.660 09:23:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:00.660 09:23:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.660 09:23:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.660 09:23:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:00.660 09:23:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.660 09:23:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:00.660 09:23:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:00.660 09:23:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:00.660 09:23:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:00.660 09:23:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:00.660 09:23:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:33:00.660 09:23:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:00.660 09:23:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:00.660 09:23:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:00.660 09:23:37 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:33:00.660 09:23:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:33:00.660 09:23:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:33:00.919 09:23:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:00.919 09:23:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:01.853 Waiting for block devices as requested 00:33:01.853 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:02.111 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:02.111 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:02.374 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:02.374 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:02.374 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:02.374 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:02.632 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:02.632 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:33:02.632 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:02.891 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:02.891 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:02.891 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:02.891 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:03.149 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:03.149 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:03.149 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:03.407 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:33:03.407 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:03.407 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:33:03.407 09:23:40 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:33:03.407 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:03.407 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:33:03.407 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:33:03.407 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:03.407 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:03.407 No valid GPT data, bailing 00:33:03.407 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:03.407 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:33:03.407 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:33:03.407 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:33:03.407 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:33:03.407 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:03.407 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:03.407 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:03.407 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:03.407 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:33:03.407 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:33:03.407 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:33:03.407 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:33:03.407 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:33:03.407 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:33:03.407 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:33:03.407 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:03.407 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:33:03.407 00:33:03.407 Discovery Log Number of Records 2, Generation counter 2 00:33:03.407 =====Discovery Log Entry 0====== 00:33:03.407 trtype: tcp 00:33:03.407 adrfam: ipv4 00:33:03.407 subtype: current discovery subsystem 00:33:03.407 treq: not specified, sq flow control disable supported 00:33:03.407 portid: 1 00:33:03.407 trsvcid: 4420 00:33:03.407 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:03.407 traddr: 10.0.0.1 00:33:03.407 eflags: none 00:33:03.407 sectype: none 00:33:03.407 =====Discovery Log Entry 1====== 00:33:03.407 trtype: tcp 00:33:03.407 adrfam: ipv4 00:33:03.407 subtype: nvme subsystem 00:33:03.408 treq: not specified, sq flow control disable supported 00:33:03.408 portid: 1 00:33:03.408 trsvcid: 4420 00:33:03.408 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:03.408 traddr: 10.0.0.1 00:33:03.408 eflags: none 00:33:03.408 sectype: none 00:33:03.408 09:23:40 
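The `configure_kernel_target` steps traced above (the `mkdir`, `echo`, and `ln -s` calls against `/sys/kernel/config/nvmet`) amount to setting up a kernel NVMe/TCP target through configfs. A sketch of those steps, assuming the attribute file names that the echoed values (`1`, device path, address, transport, port, address family) normally correspond to; the real procedure needs root and the `nvmet`/`nvmet-tcp` modules, so the function takes the configfs root as a parameter and is exercised here against a scratch directory:

```shell
# Hedged reconstruction of the configfs setup traced in the log.
# Attribute file names are assumptions inferred from the echoed values.
configure_kernel_target() {
  local nvmet=$1
  local nqn=nqn.2016-06.io.spdk:testnqn
  local subsys="$nvmet/subsystems/$nqn"

  # On real configfs the kernel auto-creates the attribute files on mkdir.
  mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1/subsystems"

  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"  # backing device from the log
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
  echo tcp          > "$nvmet/ports/1/addr_trtype"
  echo 4420         > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4         > "$nvmet/ports/1/addr_adrfam"

  # Linking the subsystem under the port exposes it on that listener.
  ln -sf "$subsys" "$nvmet/ports/1/subsystems/"
}

# Dry run against a scratch directory instead of the real /sys/kernel/config/nvmet:
scratch=$(mktemp -d)
configure_kernel_target "$scratch"
```

Once the link is in place, the target answers discovery, which is exactly what the `nvme discover` output above confirms: one discovery entry plus one entry for `nqn.2016-06.io.spdk:testnqn` on 10.0.0.1:4420.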
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:33:03.408 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:03.408 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:03.408 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:33:03.408 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:03.408 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:03.408 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:03.408 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:03.408 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:03.408 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:03.408 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:03.408 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:03.408 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:03.408 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:03.408 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:33:03.408 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:33:03.408 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:33:03.408 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:03.408 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:03.408 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:03.408 09:23:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:06.683 Initializing NVMe Controllers 00:33:06.683 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:06.683 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:06.683 Initialization complete. Launching workers. 
00:33:06.683 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 48875, failed: 0 00:33:06.683 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 48875, failed to submit 0 00:33:06.684 success 0, unsuccessful 48875, failed 0 00:33:06.684 09:23:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:06.684 09:23:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:09.961 Initializing NVMe Controllers 00:33:09.961 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:09.961 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:09.961 Initialization complete. Launching workers. 00:33:09.961 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 95724, failed: 0 00:33:09.961 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21714, failed to submit 74010 00:33:09.961 success 0, unsuccessful 21714, failed 0 00:33:09.961 09:23:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:09.961 09:23:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:13.237 Initializing NVMe Controllers 00:33:13.237 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:13.237 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:13.237 Initialization complete. Launching workers. 
00:33:13.238 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 89239, failed: 0 00:33:13.238 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22286, failed to submit 66953 00:33:13.238 success 0, unsuccessful 22286, failed 0 00:33:13.238 09:23:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:33:13.238 09:23:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:13.238 09:23:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:33:13.238 09:23:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:13.238 09:23:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:13.238 09:23:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:13.238 09:23:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:13.238 09:23:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:33:13.238 09:23:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:33:13.238 09:23:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:14.173 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:14.173 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:14.173 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:14.173 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:14.173 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:14.173 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:14.173 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:14.173 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:14.173 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:14.173 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:14.173 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:14.173 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:14.173 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:14.173 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:14.173 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:14.173 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:15.114 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:33:15.421 00:33:15.421 real 0m14.550s 00:33:15.421 user 0m6.194s 00:33:15.421 sys 0m3.580s 00:33:15.421 09:23:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:15.421 09:23:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:15.421 ************************************ 00:33:15.421 END TEST kernel_target_abort 00:33:15.421 ************************************ 00:33:15.421 09:23:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:15.421 09:23:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:33:15.421 09:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:15.421 09:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:33:15.421 09:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:15.421 09:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:33:15.421 09:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:15.421 09:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:15.421 rmmod nvme_tcp 00:33:15.421 rmmod nvme_fabrics 00:33:15.421 rmmod nvme_keyring 00:33:15.421 09:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
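The `clean_kernel_target` trace above undoes the configfs setup in the reverse order of creation: unlink the subsystem from the port, remove the namespace, then the port and subsystem directories, and finally unload the modules. A sketch under the same scratch-directory assumption as a real run would need root on the live configfs tree:

```shell
# Hedged sketch of the teardown order traced in the log.
clean_kernel_target() {
  local nvmet=$1
  local nqn=nqn.2016-06.io.spdk:testnqn
  local subsys="$nvmet/subsystems/$nqn"

  rm -f "$nvmet/ports/1/subsystems/$nqn"              # unlink port -> subsystem first
  rmdir "$subsys/namespaces/1" "$subsys/namespaces" "$subsys"
  rmdir "$nvmet/ports/1/subsystems" "$nvmet/ports/1"
  # On a real host the log then runs: modprobe -r nvmet_tcp nvmet
  # (the namespaces/ and subsystems/ parent dirs are kernel-managed on real
  # configfs; they are removed explicitly here only for the scratch layout)
}

# Build a scratch layout mirroring the configured state, then tear it down:
scratch=$(mktemp -d)
mkdir -p "$scratch/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1" \
         "$scratch/ports/1/subsystems"
ln -s "$scratch/subsystems/nqn.2016-06.io.spdk:testnqn" \
      "$scratch/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
clean_kernel_target "$scratch"
```

Ordering matters: the port-to-subsystem symlink must go before the subsystem directory can be removed, which is why `rm -f` on the link precedes every `rmdir`.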
-- # modprobe -v -r nvme-fabrics 00:33:15.421 09:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:33:15.421 09:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:33:15.421 09:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3514408 ']' 00:33:15.421 09:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3514408 00:33:15.421 09:23:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 3514408 ']' 00:33:15.421 09:23:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 3514408 00:33:15.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3514408) - No such process 00:33:15.421 09:23:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 3514408 is not found' 00:33:15.421 Process with pid 3514408 is not found 00:33:15.421 09:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:33:15.421 09:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:16.815 Waiting for block devices as requested 00:33:16.815 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:16.815 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:16.815 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:16.815 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:17.074 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:17.074 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:17.074 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:17.074 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:17.331 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:33:17.331 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:17.331 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:17.590 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:17.590 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:17.590 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:17.590 
0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:17.850 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:17.850 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:17.850 09:23:54 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:17.850 09:23:54 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:17.850 09:23:54 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:33:17.850 09:23:54 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:33:17.850 09:23:54 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:17.850 09:23:54 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:33:17.850 09:23:54 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:17.850 09:23:54 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:17.850 09:23:54 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:17.850 09:23:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:17.850 09:23:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:20.380 09:23:56 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:20.380 00:33:20.380 real 0m38.390s 00:33:20.380 user 1m1.886s 00:33:20.380 sys 0m9.856s 00:33:20.380 09:23:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:20.380 09:23:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:20.380 ************************************ 00:33:20.380 END TEST nvmf_abort_qd_sizes 00:33:20.380 ************************************ 00:33:20.380 09:23:56 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:20.380 09:23:56 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:33:20.380 09:23:56 -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:33:20.380 09:23:56 -- common/autotest_common.sh@10 -- # set +x 00:33:20.380 ************************************ 00:33:20.380 START TEST keyring_file 00:33:20.380 ************************************ 00:33:20.380 09:23:56 keyring_file -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:20.380 * Looking for test storage... 00:33:20.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:20.380 09:23:56 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:20.380 09:23:56 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:33:20.380 09:23:56 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:20.380 09:23:57 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:20.380 09:23:57 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:20.380 09:23:57 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:20.380 09:23:57 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:20.380 09:23:57 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:33:20.380 09:23:57 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:33:20.380 09:23:57 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:33:20.380 09:23:57 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:33:20.380 09:23:57 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:33:20.380 09:23:57 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:33:20.380 09:23:57 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:33:20.380 09:23:57 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:20.380 09:23:57 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:33:20.380 09:23:57 keyring_file -- scripts/common.sh@345 -- # : 1 00:33:20.380 09:23:57 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:20.380 09:23:57 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:20.380 09:23:57 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:33:20.380 09:23:57 keyring_file -- scripts/common.sh@353 -- # local d=1 00:33:20.380 09:23:57 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:20.380 09:23:57 keyring_file -- scripts/common.sh@355 -- # echo 1 00:33:20.380 09:23:57 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:33:20.380 09:23:57 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:33:20.380 09:23:57 keyring_file -- scripts/common.sh@353 -- # local d=2 00:33:20.380 09:23:57 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:20.380 09:23:57 keyring_file -- scripts/common.sh@355 -- # echo 2 00:33:20.380 09:23:57 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:33:20.380 09:23:57 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:20.380 09:23:57 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:20.380 09:23:57 keyring_file -- scripts/common.sh@368 -- # return 0 00:33:20.380 09:23:57 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:20.380 09:23:57 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:20.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.380 --rc genhtml_branch_coverage=1 00:33:20.380 --rc genhtml_function_coverage=1 00:33:20.380 --rc genhtml_legend=1 00:33:20.380 --rc geninfo_all_blocks=1 00:33:20.380 --rc geninfo_unexecuted_blocks=1 00:33:20.380 00:33:20.380 ' 00:33:20.380 09:23:57 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:20.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.380 --rc genhtml_branch_coverage=1 00:33:20.380 --rc genhtml_function_coverage=1 00:33:20.380 --rc genhtml_legend=1 00:33:20.380 --rc geninfo_all_blocks=1 00:33:20.380 --rc 
geninfo_unexecuted_blocks=1 00:33:20.380 00:33:20.380 ' 00:33:20.380 09:23:57 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:20.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.380 --rc genhtml_branch_coverage=1 00:33:20.380 --rc genhtml_function_coverage=1 00:33:20.380 --rc genhtml_legend=1 00:33:20.380 --rc geninfo_all_blocks=1 00:33:20.380 --rc geninfo_unexecuted_blocks=1 00:33:20.380 00:33:20.380 ' 00:33:20.380 09:23:57 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:20.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.380 --rc genhtml_branch_coverage=1 00:33:20.380 --rc genhtml_function_coverage=1 00:33:20.380 --rc genhtml_legend=1 00:33:20.380 --rc geninfo_all_blocks=1 00:33:20.380 --rc geninfo_unexecuted_blocks=1 00:33:20.380 00:33:20.380 ' 00:33:20.380 09:23:57 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:20.380 09:23:57 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:20.380 09:23:57 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:33:20.380 09:23:57 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:20.380 09:23:57 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:20.380 09:23:57 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:20.380 09:23:57 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:20.380 09:23:57 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:20.380 09:23:57 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:20.380 09:23:57 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:20.380 09:23:57 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:20.380 09:23:57 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:20.380 09:23:57 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:20.380 09:23:57 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:20.380 09:23:57 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:20.380 09:23:57 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:20.380 09:23:57 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:20.380 09:23:57 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:20.380 09:23:57 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:20.380 09:23:57 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:20.380 09:23:57 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:33:20.380 09:23:57 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:20.380 09:23:57 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:20.380 09:23:57 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:20.380 09:23:57 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.380 09:23:57 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.381 09:23:57 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.381 09:23:57 keyring_file -- paths/export.sh@5 -- # export PATH 00:33:20.381 09:23:57 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.381 09:23:57 keyring_file -- nvmf/common.sh@51 -- # : 0 00:33:20.381 09:23:57 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:20.381 09:23:57 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:20.381 09:23:57 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:20.381 09:23:57 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:20.381 09:23:57 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:20.381 09:23:57 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:33:20.381 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:20.381 09:23:57 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:20.381 09:23:57 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:20.381 09:23:57 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:20.381 09:23:57 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:20.381 09:23:57 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:20.381 09:23:57 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:20.381 09:23:57 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:33:20.381 09:23:57 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:33:20.381 09:23:57 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:33:20.381 09:23:57 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:20.381 09:23:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:20.381 09:23:57 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:20.381 09:23:57 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:20.381 09:23:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:20.381 09:23:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:20.381 09:23:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.wEeZOvF6jG 00:33:20.381 09:23:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:20.381 09:23:57 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:20.381 09:23:57 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:33:20.381 09:23:57 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:20.381 09:23:57 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:33:20.381 09:23:57 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:33:20.381 09:23:57 keyring_file -- nvmf/common.sh@733 -- # python - 00:33:20.381 09:23:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.wEeZOvF6jG 00:33:20.381 09:23:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.wEeZOvF6jG 00:33:20.381 09:23:57 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.wEeZOvF6jG 00:33:20.381 09:23:57 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:33:20.381 09:23:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:20.381 09:23:57 keyring_file -- keyring/common.sh@17 -- # name=key1 00:33:20.381 09:23:57 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:20.381 09:23:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:20.381 09:23:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:20.381 09:23:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.qg3fIS6apZ 00:33:20.381 09:23:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:20.381 09:23:57 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:20.381 09:23:57 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:33:20.381 09:23:57 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:20.381 09:23:57 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:33:20.381 09:23:57 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:33:20.381 09:23:57 keyring_file -- nvmf/common.sh@733 -- # python - 00:33:20.381 09:23:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.qg3fIS6apZ 00:33:20.381 09:23:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.qg3fIS6apZ 00:33:20.381 09:23:57 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.qg3fIS6apZ 
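The `format_interchange_psk` / `format_key` steps above wrap a raw hex key into an `NVMeTLSkey-1` string before it is written to the temp file. A minimal Python sketch of that wrapping, assuming the interchange layout is `NVMeTLSkey-1:<2-digit digest>:base64(key || CRC32(key), little-endian):` per the NVMe/TCP TLS PSK format; the exact framing here is an assumption, not read out of the script:

```python
import base64
import struct
import zlib

def format_interchange_psk(hex_key: str, digest: int = 0) -> str:
    """Wrap a raw hex PSK in the (assumed) NVMe TLS interchange format:
    'NVMeTLSkey-1:<digest>:' + base64(key bytes + little-endian CRC32) + ':'
    """
    key = bytes.fromhex(hex_key)
    payload = key + struct.pack("<I", zlib.crc32(key))
    return f"NVMeTLSkey-1:{digest:02}:{base64.b64encode(payload).decode()}:"

# Same key material as the log's key0 prep step.
psk = format_interchange_psk("00112233445566778899aabbccddeeff", 0)
print(psk)  # e.g. NVMeTLSkey-1:00:<base64 of 20 bytes>:
```

The decoded base64 body is 20 bytes: the 16-byte key plus a 4-byte CRC32 trailer.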
00:33:20.381 09:23:57 keyring_file -- keyring/file.sh@30 -- # tgtpid=3520192 00:33:20.381 09:23:57 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:20.381 09:23:57 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3520192 00:33:20.381 09:23:57 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 3520192 ']' 00:33:20.381 09:23:57 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:20.381 09:23:57 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:20.381 09:23:57 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:20.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:20.381 09:23:57 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:20.381 09:23:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:20.381 [2024-11-20 09:23:57.219599] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:33:20.381 [2024-11-20 09:23:57.219698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3520192 ] 00:33:20.381 [2024-11-20 09:23:57.286482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:20.381 [2024-11-20 09:23:57.347024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:20.643 09:23:57 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:20.643 09:23:57 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:33:20.643 09:23:57 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:33:20.643 09:23:57 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.643 09:23:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:20.643 [2024-11-20 09:23:57.636365] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:20.643 null0 00:33:20.643 [2024-11-20 09:23:57.668432] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:20.644 [2024-11-20 09:23:57.668938] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:20.902 09:23:57 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.902 09:23:57 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:20.902 09:23:57 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:33:20.902 09:23:57 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:20.902 09:23:57 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:20.903 09:23:57 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:33:20.903 09:23:57 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:20.903 09:23:57 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:20.903 09:23:57 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:20.903 09:23:57 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.903 09:23:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:20.903 [2024-11-20 09:23:57.692473] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:33:20.903 request: 00:33:20.903 { 00:33:20.903 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:33:20.903 "secure_channel": false, 00:33:20.903 "listen_address": { 00:33:20.903 "trtype": "tcp", 00:33:20.903 "traddr": "127.0.0.1", 00:33:20.903 "trsvcid": "4420" 00:33:20.903 }, 00:33:20.903 "method": "nvmf_subsystem_add_listener", 00:33:20.903 "req_id": 1 00:33:20.903 } 00:33:20.903 Got JSON-RPC error response 00:33:20.903 response: 00:33:20.903 { 00:33:20.903 "code": -32602, 00:33:20.903 "message": "Invalid parameters" 00:33:20.903 } 00:33:20.903 09:23:57 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:20.903 09:23:57 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:33:20.903 09:23:57 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:20.903 09:23:57 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:20.903 09:23:57 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:20.903 09:23:57 keyring_file -- keyring/file.sh@47 -- # bperfpid=3520196 00:33:20.903 09:23:57 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:33:20.903 09:23:57 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3520196 /var/tmp/bperf.sock 00:33:20.903 09:23:57 
keyring_file -- common/autotest_common.sh@833 -- # '[' -z 3520196 ']' 00:33:20.903 09:23:57 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:20.903 09:23:57 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:20.903 09:23:57 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:20.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:20.903 09:23:57 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:20.903 09:23:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:20.903 [2024-11-20 09:23:57.741978] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 00:33:20.903 [2024-11-20 09:23:57.742053] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3520196 ] 00:33:20.903 [2024-11-20 09:23:57.809700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:20.903 [2024-11-20 09:23:57.873519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:21.161 09:23:57 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:21.161 09:23:57 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:33:21.161 09:23:57 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wEeZOvF6jG 00:33:21.161 09:23:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wEeZOvF6jG 00:33:21.418 09:23:58 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.qg3fIS6apZ 00:33:21.418 09:23:58 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.qg3fIS6apZ 00:33:21.676 09:23:58 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:33:21.676 09:23:58 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:33:21.676 09:23:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:21.676 09:23:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:21.676 09:23:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:21.933 09:23:58 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.wEeZOvF6jG == \/\t\m\p\/\t\m\p\.\w\E\e\Z\O\v\F\6\j\G ]] 00:33:21.933 09:23:58 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:33:21.933 09:23:58 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:33:21.933 09:23:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:21.933 09:23:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:21.933 09:23:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:22.192 09:23:59 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.qg3fIS6apZ == \/\t\m\p\/\t\m\p\.\q\g\3\f\I\S\6\a\p\Z ]] 00:33:22.192 09:23:59 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:33:22.192 09:23:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:22.192 09:23:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:22.192 09:23:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:22.192 09:23:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:22.192 09:23:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
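The repeated `keyring_get_keys` pipelines above all do the same thing: pull one key's `refcnt` out of the RPC's JSON array via `jq '.[] | select(.name == "keyN")'` and `jq -r .refcnt`. A small sketch of that filter in Python; the `name`/`refcnt`/`path` field names match the log, but the sample payload values are illustrative only:

```python
import json

def get_refcnt(keys_json: str, name: str) -> int:
    """Equivalent of: jq '.[] | select(.name == NAME)' | jq -r .refcnt"""
    for key in json.loads(keys_json):
        if key["name"] == name:
            return key["refcnt"]
    raise KeyError(name)

# Illustrative payload shaped like keyring_get_keys output.
sample = json.dumps([
    {"name": "key0", "path": "/tmp/tmp.wEeZOvF6jG", "refcnt": 2},
    {"name": "key1", "path": "/tmp/tmp.qg3fIS6apZ", "refcnt": 1},
])
print(get_refcnt(sample, "key0"))  # → 2
```

The `(( 1 == 1 ))` / `(( 2 == 2 ))` checks in the log then assert that the extracted refcount matches the expected value.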
00:33:22.450 09:23:59 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:33:22.450 09:23:59 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:33:22.450 09:23:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:22.450 09:23:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:22.450 09:23:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:22.450 09:23:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:22.450 09:23:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:22.707 09:23:59 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:33:22.707 09:23:59 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:22.708 09:23:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:22.965 [2024-11-20 09:23:59.879605] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:22.965 nvme0n1 00:33:22.965 09:23:59 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:33:22.965 09:23:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:22.965 09:23:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:22.965 09:23:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:22.965 09:23:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:22.965 09:23:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:33:23.531 09:24:00 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:33:23.531 09:24:00 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:33:23.531 09:24:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:23.531 09:24:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:23.531 09:24:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:23.531 09:24:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:23.531 09:24:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:23.531 09:24:00 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:33:23.531 09:24:00 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:23.790 Running I/O for 1 seconds... 00:33:24.723 9939.00 IOPS, 38.82 MiB/s 00:33:24.723 Latency(us) 00:33:24.723 [2024-11-20T08:24:01.752Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.723 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:33:24.723 nvme0n1 : 1.01 9990.25 39.02 0.00 0.00 12772.16 7184.69 21748.24 00:33:24.723 [2024-11-20T08:24:01.752Z] =================================================================================================================== 00:33:24.723 [2024-11-20T08:24:01.752Z] Total : 9990.25 39.02 0.00 0.00 12772.16 7184.69 21748.24 00:33:24.723 { 00:33:24.723 "results": [ 00:33:24.723 { 00:33:24.723 "job": "nvme0n1", 00:33:24.723 "core_mask": "0x2", 00:33:24.723 "workload": "randrw", 00:33:24.723 "percentage": 50, 00:33:24.723 "status": "finished", 00:33:24.723 "queue_depth": 128, 00:33:24.723 "io_size": 4096, 00:33:24.723 "runtime": 1.007682, 00:33:24.723 "iops": 9990.254862148971, 00:33:24.723 "mibps": 39.02443305526942, 
00:33:24.723 "io_failed": 0, 00:33:24.723 "io_timeout": 0, 00:33:24.723 "avg_latency_us": 12772.155280215153, 00:33:24.723 "min_latency_us": 7184.687407407408, 00:33:24.723 "max_latency_us": 21748.242962962962 00:33:24.723 } 00:33:24.723 ], 00:33:24.723 "core_count": 1 00:33:24.723 } 00:33:24.723 09:24:01 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:24.723 09:24:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:24.980 09:24:01 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:33:24.980 09:24:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:24.980 09:24:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:24.980 09:24:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:24.980 09:24:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:24.980 09:24:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:25.238 09:24:02 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:33:25.238 09:24:02 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:33:25.238 09:24:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:25.238 09:24:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:25.238 09:24:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:25.238 09:24:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:25.238 09:24:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:25.804 09:24:02 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:33:25.804 09:24:02 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:25.804 09:24:02 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:33:25.804 09:24:02 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:25.804 09:24:02 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:33:25.804 09:24:02 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:25.804 09:24:02 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:33:25.804 09:24:02 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:25.804 09:24:02 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:25.804 09:24:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:25.804 [2024-11-20 09:24:02.809267] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:25.804 [2024-11-20 09:24:02.809937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b9510 (107): Transport endpoint is not connected 00:33:25.804 [2024-11-20 09:24:02.810913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b9510 (9): Bad file descriptor 00:33:25.804 [2024-11-20 09:24:02.811913] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:33:25.804 [2024-11-20 09:24:02.811939] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:25.804 [2024-11-20 09:24:02.811968] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:33:25.804 [2024-11-20 09:24:02.811982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:33:25.804 request: 00:33:25.804 { 00:33:25.804 "name": "nvme0", 00:33:25.805 "trtype": "tcp", 00:33:25.805 "traddr": "127.0.0.1", 00:33:25.805 "adrfam": "ipv4", 00:33:25.805 "trsvcid": "4420", 00:33:25.805 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:25.805 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:25.805 "prchk_reftag": false, 00:33:25.805 "prchk_guard": false, 00:33:25.805 "hdgst": false, 00:33:25.805 "ddgst": false, 00:33:25.805 "psk": "key1", 00:33:25.805 "allow_unrecognized_csi": false, 00:33:25.805 "method": "bdev_nvme_attach_controller", 00:33:25.805 "req_id": 1 00:33:25.805 } 00:33:25.805 Got JSON-RPC error response 00:33:25.805 response: 00:33:25.805 { 00:33:25.805 "code": -5, 00:33:25.805 "message": "Input/output error" 00:33:25.805 } 00:33:25.805 09:24:02 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:33:25.805 09:24:02 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:25.805 09:24:02 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:25.805 09:24:02 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:25.805 09:24:02 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:33:25.805 09:24:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:26.062 09:24:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:26.062 09:24:02 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:33:26.062 09:24:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:26.062 09:24:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:26.320 09:24:03 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:33:26.320 09:24:03 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:33:26.320 09:24:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:26.320 09:24:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:26.320 09:24:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:26.320 09:24:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:26.320 09:24:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:26.578 09:24:03 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:33:26.578 09:24:03 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:33:26.578 09:24:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:26.836 09:24:03 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:33:26.836 09:24:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:33:27.093 09:24:03 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:33:27.093 09:24:03 keyring_file -- keyring/file.sh@78 -- # jq length 00:33:27.093 09:24:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:27.350 09:24:04 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:33:27.350 09:24:04 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.wEeZOvF6jG 00:33:27.350 09:24:04 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.wEeZOvF6jG 00:33:27.350 09:24:04 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:33:27.350 09:24:04 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.wEeZOvF6jG 00:33:27.350 09:24:04 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:33:27.350 09:24:04 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:27.350 09:24:04 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:33:27.350 09:24:04 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:27.350 09:24:04 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wEeZOvF6jG 00:33:27.351 09:24:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wEeZOvF6jG 00:33:27.608 [2024-11-20 09:24:04.494062] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.wEeZOvF6jG': 0100660 00:33:27.608 [2024-11-20 09:24:04.494095] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:33:27.608 request: 00:33:27.608 { 00:33:27.608 "name": "key0", 00:33:27.608 "path": "/tmp/tmp.wEeZOvF6jG", 00:33:27.608 "method": "keyring_file_add_key", 00:33:27.608 "req_id": 1 00:33:27.608 } 00:33:27.608 Got JSON-RPC error response 00:33:27.608 response: 00:33:27.608 { 00:33:27.608 "code": -1, 00:33:27.608 "message": "Operation not permitted" 00:33:27.608 } 00:33:27.608 09:24:04 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:33:27.608 09:24:04 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:27.608 09:24:04 
keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:27.608 09:24:04 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:27.608 09:24:04 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.wEeZOvF6jG 00:33:27.608 09:24:04 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wEeZOvF6jG 00:33:27.608 09:24:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wEeZOvF6jG 00:33:27.866 09:24:04 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.wEeZOvF6jG 00:33:27.866 09:24:04 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:33:27.866 09:24:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:27.866 09:24:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:27.866 09:24:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:27.866 09:24:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:27.866 09:24:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:28.124 09:24:05 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:33:28.124 09:24:05 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:28.124 09:24:05 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:33:28.124 09:24:05 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:28.124 09:24:05 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:33:28.124 09:24:05 keyring_file -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:28.124 09:24:05 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:33:28.124 09:24:05 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:28.124 09:24:05 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:28.124 09:24:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:28.383 [2024-11-20 09:24:05.352396] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.wEeZOvF6jG': No such file or directory 00:33:28.383 [2024-11-20 09:24:05.352429] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:33:28.383 [2024-11-20 09:24:05.352467] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:33:28.383 [2024-11-20 09:24:05.352481] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:33:28.383 [2024-11-20 09:24:05.352494] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:28.383 [2024-11-20 09:24:05.352513] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:33:28.383 request: 00:33:28.383 { 00:33:28.383 "name": "nvme0", 00:33:28.383 "trtype": "tcp", 00:33:28.383 "traddr": "127.0.0.1", 00:33:28.383 "adrfam": "ipv4", 00:33:28.383 "trsvcid": "4420", 00:33:28.383 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:28.383 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:33:28.383 "prchk_reftag": false, 00:33:28.383 "prchk_guard": false, 00:33:28.383 "hdgst": false, 00:33:28.383 "ddgst": false, 00:33:28.383 "psk": "key0", 00:33:28.383 "allow_unrecognized_csi": false, 00:33:28.383 "method": "bdev_nvme_attach_controller", 00:33:28.383 "req_id": 1 00:33:28.383 } 00:33:28.383 Got JSON-RPC error response 00:33:28.383 response: 00:33:28.383 { 00:33:28.383 "code": -19, 00:33:28.383 "message": "No such device" 00:33:28.383 } 00:33:28.383 09:24:05 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:33:28.383 09:24:05 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:28.383 09:24:05 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:28.383 09:24:05 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:28.383 09:24:05 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:33:28.383 09:24:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:28.641 09:24:05 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:28.641 09:24:05 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:28.641 09:24:05 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:28.641 09:24:05 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:28.641 09:24:05 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:28.641 09:24:05 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:28.641 09:24:05 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ZIwP6mJKiX 00:33:28.641 09:24:05 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:28.641 09:24:05 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:28.641 09:24:05 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:33:28.641 09:24:05 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:28.641 09:24:05 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:33:28.641 09:24:05 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:33:28.641 09:24:05 keyring_file -- nvmf/common.sh@733 -- # python - 00:33:28.899 09:24:05 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ZIwP6mJKiX 00:33:28.899 09:24:05 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ZIwP6mJKiX 00:33:28.899 09:24:05 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.ZIwP6mJKiX 00:33:28.899 09:24:05 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZIwP6mJKiX 00:33:28.899 09:24:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ZIwP6mJKiX 00:33:29.157 09:24:05 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:29.157 09:24:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:29.415 nvme0n1 00:33:29.415 09:24:06 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:33:29.415 09:24:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:29.415 09:24:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:29.415 09:24:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:29.415 09:24:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:29.415 
09:24:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:29.673 09:24:06 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:33:29.673 09:24:06 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:33:29.673 09:24:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:29.931 09:24:06 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:33:29.931 09:24:06 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:33:29.931 09:24:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:29.931 09:24:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:29.931 09:24:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:30.189 09:24:07 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:33:30.189 09:24:07 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:33:30.189 09:24:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:30.189 09:24:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:30.189 09:24:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:30.189 09:24:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:30.189 09:24:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:30.753 09:24:07 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:33:30.753 09:24:07 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:30.753 09:24:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller 
nvme0 00:33:31.011 09:24:07 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:33:31.011 09:24:07 keyring_file -- keyring/file.sh@105 -- # jq length 00:33:31.011 09:24:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:31.269 09:24:08 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:33:31.269 09:24:08 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZIwP6mJKiX 00:33:31.269 09:24:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ZIwP6mJKiX 00:33:31.527 09:24:08 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.qg3fIS6apZ 00:33:31.527 09:24:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.qg3fIS6apZ 00:33:31.784 09:24:08 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:31.784 09:24:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:32.043 nvme0n1 00:33:32.043 09:24:09 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:33:32.043 09:24:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:33:32.610 09:24:09 keyring_file -- keyring/file.sh@113 -- # config='{ 00:33:32.610 "subsystems": [ 00:33:32.610 { 00:33:32.610 "subsystem": "keyring", 00:33:32.610 
"config": [ 00:33:32.610 { 00:33:32.610 "method": "keyring_file_add_key", 00:33:32.610 "params": { 00:33:32.610 "name": "key0", 00:33:32.610 "path": "/tmp/tmp.ZIwP6mJKiX" 00:33:32.610 } 00:33:32.610 }, 00:33:32.610 { 00:33:32.610 "method": "keyring_file_add_key", 00:33:32.610 "params": { 00:33:32.610 "name": "key1", 00:33:32.610 "path": "/tmp/tmp.qg3fIS6apZ" 00:33:32.610 } 00:33:32.610 } 00:33:32.610 ] 00:33:32.610 }, 00:33:32.610 { 00:33:32.610 "subsystem": "iobuf", 00:33:32.610 "config": [ 00:33:32.610 { 00:33:32.610 "method": "iobuf_set_options", 00:33:32.610 "params": { 00:33:32.610 "small_pool_count": 8192, 00:33:32.610 "large_pool_count": 1024, 00:33:32.610 "small_bufsize": 8192, 00:33:32.610 "large_bufsize": 135168, 00:33:32.610 "enable_numa": false 00:33:32.610 } 00:33:32.610 } 00:33:32.610 ] 00:33:32.610 }, 00:33:32.610 { 00:33:32.610 "subsystem": "sock", 00:33:32.610 "config": [ 00:33:32.610 { 00:33:32.610 "method": "sock_set_default_impl", 00:33:32.610 "params": { 00:33:32.610 "impl_name": "posix" 00:33:32.610 } 00:33:32.610 }, 00:33:32.610 { 00:33:32.610 "method": "sock_impl_set_options", 00:33:32.610 "params": { 00:33:32.610 "impl_name": "ssl", 00:33:32.610 "recv_buf_size": 4096, 00:33:32.610 "send_buf_size": 4096, 00:33:32.610 "enable_recv_pipe": true, 00:33:32.610 "enable_quickack": false, 00:33:32.610 "enable_placement_id": 0, 00:33:32.610 "enable_zerocopy_send_server": true, 00:33:32.610 "enable_zerocopy_send_client": false, 00:33:32.610 "zerocopy_threshold": 0, 00:33:32.610 "tls_version": 0, 00:33:32.610 "enable_ktls": false 00:33:32.610 } 00:33:32.610 }, 00:33:32.610 { 00:33:32.610 "method": "sock_impl_set_options", 00:33:32.610 "params": { 00:33:32.610 "impl_name": "posix", 00:33:32.610 "recv_buf_size": 2097152, 00:33:32.610 "send_buf_size": 2097152, 00:33:32.610 "enable_recv_pipe": true, 00:33:32.610 "enable_quickack": false, 00:33:32.610 "enable_placement_id": 0, 00:33:32.610 "enable_zerocopy_send_server": true, 00:33:32.610 
"enable_zerocopy_send_client": false, 00:33:32.610 "zerocopy_threshold": 0, 00:33:32.610 "tls_version": 0, 00:33:32.610 "enable_ktls": false 00:33:32.610 } 00:33:32.610 } 00:33:32.610 ] 00:33:32.610 }, 00:33:32.610 { 00:33:32.610 "subsystem": "vmd", 00:33:32.610 "config": [] 00:33:32.610 }, 00:33:32.610 { 00:33:32.610 "subsystem": "accel", 00:33:32.610 "config": [ 00:33:32.610 { 00:33:32.610 "method": "accel_set_options", 00:33:32.610 "params": { 00:33:32.610 "small_cache_size": 128, 00:33:32.610 "large_cache_size": 16, 00:33:32.610 "task_count": 2048, 00:33:32.610 "sequence_count": 2048, 00:33:32.610 "buf_count": 2048 00:33:32.610 } 00:33:32.610 } 00:33:32.610 ] 00:33:32.610 }, 00:33:32.610 { 00:33:32.610 "subsystem": "bdev", 00:33:32.610 "config": [ 00:33:32.610 { 00:33:32.610 "method": "bdev_set_options", 00:33:32.610 "params": { 00:33:32.610 "bdev_io_pool_size": 65535, 00:33:32.610 "bdev_io_cache_size": 256, 00:33:32.610 "bdev_auto_examine": true, 00:33:32.610 "iobuf_small_cache_size": 128, 00:33:32.610 "iobuf_large_cache_size": 16 00:33:32.610 } 00:33:32.610 }, 00:33:32.610 { 00:33:32.610 "method": "bdev_raid_set_options", 00:33:32.610 "params": { 00:33:32.610 "process_window_size_kb": 1024, 00:33:32.610 "process_max_bandwidth_mb_sec": 0 00:33:32.610 } 00:33:32.610 }, 00:33:32.610 { 00:33:32.610 "method": "bdev_iscsi_set_options", 00:33:32.610 "params": { 00:33:32.610 "timeout_sec": 30 00:33:32.610 } 00:33:32.610 }, 00:33:32.610 { 00:33:32.610 "method": "bdev_nvme_set_options", 00:33:32.610 "params": { 00:33:32.610 "action_on_timeout": "none", 00:33:32.610 "timeout_us": 0, 00:33:32.610 "timeout_admin_us": 0, 00:33:32.610 "keep_alive_timeout_ms": 10000, 00:33:32.610 "arbitration_burst": 0, 00:33:32.610 "low_priority_weight": 0, 00:33:32.610 "medium_priority_weight": 0, 00:33:32.610 "high_priority_weight": 0, 00:33:32.610 "nvme_adminq_poll_period_us": 10000, 00:33:32.610 "nvme_ioq_poll_period_us": 0, 00:33:32.610 "io_queue_requests": 512, 00:33:32.610 
"delay_cmd_submit": true, 00:33:32.610 "transport_retry_count": 4, 00:33:32.610 "bdev_retry_count": 3, 00:33:32.610 "transport_ack_timeout": 0, 00:33:32.610 "ctrlr_loss_timeout_sec": 0, 00:33:32.610 "reconnect_delay_sec": 0, 00:33:32.610 "fast_io_fail_timeout_sec": 0, 00:33:32.610 "disable_auto_failback": false, 00:33:32.610 "generate_uuids": false, 00:33:32.610 "transport_tos": 0, 00:33:32.610 "nvme_error_stat": false, 00:33:32.610 "rdma_srq_size": 0, 00:33:32.610 "io_path_stat": false, 00:33:32.610 "allow_accel_sequence": false, 00:33:32.610 "rdma_max_cq_size": 0, 00:33:32.610 "rdma_cm_event_timeout_ms": 0, 00:33:32.610 "dhchap_digests": [ 00:33:32.610 "sha256", 00:33:32.610 "sha384", 00:33:32.611 "sha512" 00:33:32.611 ], 00:33:32.611 "dhchap_dhgroups": [ 00:33:32.611 "null", 00:33:32.611 "ffdhe2048", 00:33:32.611 "ffdhe3072", 00:33:32.611 "ffdhe4096", 00:33:32.611 "ffdhe6144", 00:33:32.611 "ffdhe8192" 00:33:32.611 ] 00:33:32.611 } 00:33:32.611 }, 00:33:32.611 { 00:33:32.611 "method": "bdev_nvme_attach_controller", 00:33:32.611 "params": { 00:33:32.611 "name": "nvme0", 00:33:32.611 "trtype": "TCP", 00:33:32.611 "adrfam": "IPv4", 00:33:32.611 "traddr": "127.0.0.1", 00:33:32.611 "trsvcid": "4420", 00:33:32.611 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:32.611 "prchk_reftag": false, 00:33:32.611 "prchk_guard": false, 00:33:32.611 "ctrlr_loss_timeout_sec": 0, 00:33:32.611 "reconnect_delay_sec": 0, 00:33:32.611 "fast_io_fail_timeout_sec": 0, 00:33:32.611 "psk": "key0", 00:33:32.611 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:32.611 "hdgst": false, 00:33:32.611 "ddgst": false, 00:33:32.611 "multipath": "multipath" 00:33:32.611 } 00:33:32.611 }, 00:33:32.611 { 00:33:32.611 "method": "bdev_nvme_set_hotplug", 00:33:32.611 "params": { 00:33:32.611 "period_us": 100000, 00:33:32.611 "enable": false 00:33:32.611 } 00:33:32.611 }, 00:33:32.611 { 00:33:32.611 "method": "bdev_wait_for_examine" 00:33:32.611 } 00:33:32.611 ] 00:33:32.611 }, 00:33:32.611 { 00:33:32.611 
"subsystem": "nbd", 00:33:32.611 "config": [] 00:33:32.611 } 00:33:32.611 ] 00:33:32.611 }' 00:33:32.611 09:24:09 keyring_file -- keyring/file.sh@115 -- # killprocess 3520196 00:33:32.611 09:24:09 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 3520196 ']' 00:33:32.611 09:24:09 keyring_file -- common/autotest_common.sh@956 -- # kill -0 3520196 00:33:32.611 09:24:09 keyring_file -- common/autotest_common.sh@957 -- # uname 00:33:32.611 09:24:09 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:32.611 09:24:09 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3520196 00:33:32.611 09:24:09 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:32.611 09:24:09 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:32.611 09:24:09 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3520196' 00:33:32.611 killing process with pid 3520196 00:33:32.611 09:24:09 keyring_file -- common/autotest_common.sh@971 -- # kill 3520196 00:33:32.611 Received shutdown signal, test time was about 1.000000 seconds 00:33:32.611 00:33:32.611 Latency(us) 00:33:32.611 [2024-11-20T08:24:09.640Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:32.611 [2024-11-20T08:24:09.640Z] =================================================================================================================== 00:33:32.611 [2024-11-20T08:24:09.640Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:32.611 09:24:09 keyring_file -- common/autotest_common.sh@976 -- # wait 3520196 00:33:32.611 09:24:09 keyring_file -- keyring/file.sh@118 -- # bperfpid=3522407 00:33:32.611 09:24:09 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3522407 /var/tmp/bperf.sock 00:33:32.611 09:24:09 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 3522407 ']' 00:33:32.611 09:24:09 keyring_file -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:33:32.611 09:24:09 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:32.611 09:24:09 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:33:32.611 09:24:09 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:32.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:32.611 09:24:09 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:32.611 09:24:09 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:33:32.611 "subsystems": [ 00:33:32.611 { 00:33:32.611 "subsystem": "keyring", 00:33:32.611 "config": [ 00:33:32.611 { 00:33:32.611 "method": "keyring_file_add_key", 00:33:32.611 "params": { 00:33:32.611 "name": "key0", 00:33:32.611 "path": "/tmp/tmp.ZIwP6mJKiX" 00:33:32.611 } 00:33:32.611 }, 00:33:32.611 { 00:33:32.611 "method": "keyring_file_add_key", 00:33:32.611 "params": { 00:33:32.611 "name": "key1", 00:33:32.611 "path": "/tmp/tmp.qg3fIS6apZ" 00:33:32.611 } 00:33:32.611 } 00:33:32.611 ] 00:33:32.611 }, 00:33:32.611 { 00:33:32.611 "subsystem": "iobuf", 00:33:32.611 "config": [ 00:33:32.611 { 00:33:32.611 "method": "iobuf_set_options", 00:33:32.611 "params": { 00:33:32.611 "small_pool_count": 8192, 00:33:32.611 "large_pool_count": 1024, 00:33:32.611 "small_bufsize": 8192, 00:33:32.611 "large_bufsize": 135168, 00:33:32.611 "enable_numa": false 00:33:32.611 } 00:33:32.611 } 00:33:32.611 ] 00:33:32.611 }, 00:33:32.611 { 00:33:32.611 "subsystem": "sock", 00:33:32.611 "config": [ 00:33:32.611 { 00:33:32.611 "method": "sock_set_default_impl", 00:33:32.611 "params": { 00:33:32.611 "impl_name": "posix" 00:33:32.611 } 00:33:32.611 }, 00:33:32.611 { 00:33:32.611 "method": "sock_impl_set_options", 00:33:32.611 
"params": { 00:33:32.611 "impl_name": "ssl", 00:33:32.611 "recv_buf_size": 4096, 00:33:32.611 "send_buf_size": 4096, 00:33:32.611 "enable_recv_pipe": true, 00:33:32.611 "enable_quickack": false, 00:33:32.611 "enable_placement_id": 0, 00:33:32.611 "enable_zerocopy_send_server": true, 00:33:32.611 "enable_zerocopy_send_client": false, 00:33:32.611 "zerocopy_threshold": 0, 00:33:32.611 "tls_version": 0, 00:33:32.611 "enable_ktls": false 00:33:32.611 } 00:33:32.611 }, 00:33:32.611 { 00:33:32.611 "method": "sock_impl_set_options", 00:33:32.611 "params": { 00:33:32.611 "impl_name": "posix", 00:33:32.611 "recv_buf_size": 2097152, 00:33:32.611 "send_buf_size": 2097152, 00:33:32.611 "enable_recv_pipe": true, 00:33:32.611 "enable_quickack": false, 00:33:32.611 "enable_placement_id": 0, 00:33:32.611 "enable_zerocopy_send_server": true, 00:33:32.611 "enable_zerocopy_send_client": false, 00:33:32.611 "zerocopy_threshold": 0, 00:33:32.611 "tls_version": 0, 00:33:32.611 "enable_ktls": false 00:33:32.611 } 00:33:32.611 } 00:33:32.611 ] 00:33:32.611 }, 00:33:32.611 { 00:33:32.611 "subsystem": "vmd", 00:33:32.611 "config": [] 00:33:32.611 }, 00:33:32.611 { 00:33:32.611 "subsystem": "accel", 00:33:32.611 "config": [ 00:33:32.611 { 00:33:32.611 "method": "accel_set_options", 00:33:32.611 "params": { 00:33:32.611 "small_cache_size": 128, 00:33:32.611 "large_cache_size": 16, 00:33:32.611 "task_count": 2048, 00:33:32.611 "sequence_count": 2048, 00:33:32.611 "buf_count": 2048 00:33:32.611 } 00:33:32.611 } 00:33:32.611 ] 00:33:32.611 }, 00:33:32.611 { 00:33:32.611 "subsystem": "bdev", 00:33:32.611 "config": [ 00:33:32.611 { 00:33:32.611 "method": "bdev_set_options", 00:33:32.611 "params": { 00:33:32.611 "bdev_io_pool_size": 65535, 00:33:32.611 "bdev_io_cache_size": 256, 00:33:32.611 "bdev_auto_examine": true, 00:33:32.611 "iobuf_small_cache_size": 128, 00:33:32.611 "iobuf_large_cache_size": 16 00:33:32.611 } 00:33:32.611 }, 00:33:32.611 { 00:33:32.611 "method": "bdev_raid_set_options", 
00:33:32.611 "params": { 00:33:32.611 "process_window_size_kb": 1024, 00:33:32.611 "process_max_bandwidth_mb_sec": 0 00:33:32.611 } 00:33:32.611 }, 00:33:32.611 { 00:33:32.611 "method": "bdev_iscsi_set_options", 00:33:32.611 "params": { 00:33:32.611 "timeout_sec": 30 00:33:32.611 } 00:33:32.611 }, 00:33:32.611 { 00:33:32.611 "method": "bdev_nvme_set_options", 00:33:32.611 "params": { 00:33:32.611 "action_on_timeout": "none", 00:33:32.611 "timeout_us": 0, 00:33:32.611 "timeout_admin_us": 0, 00:33:32.611 "keep_alive_timeout_ms": 10000, 00:33:32.611 "arbitration_burst": 0, 00:33:32.611 "low_priority_weight": 0, 00:33:32.611 "medium_priority_weight": 0, 00:33:32.611 "high_priority_weight": 0, 00:33:32.612 "nvme_adminq_poll_period_us": 10000, 00:33:32.612 "nvme_ioq_poll_period_us": 0, 00:33:32.612 "io_queue_requests": 512, 00:33:32.612 "delay_cmd_submit": true, 00:33:32.612 "transport_retry_count": 4, 00:33:32.612 "bdev_retry_count": 3, 00:33:32.612 "transport_ack_timeout": 0, 00:33:32.612 "ctrlr_loss_timeout_sec": 0, 00:33:32.612 "reconnect_delay_sec": 0, 00:33:32.612 "fast_io_fail_timeout_sec": 0, 00:33:32.612 "disable_auto_failback": false, 00:33:32.612 "generate_uuids": false, 00:33:32.612 "transport_tos": 0, 00:33:32.612 "nvme_error_stat": false, 00:33:32.612 "rdma_srq_size": 0, 00:33:32.612 "io_path_stat": false, 00:33:32.612 "allow_accel_sequence": false, 00:33:32.612 "rdma_max_cq_size": 0, 00:33:32.612 "rdma_cm_event_timeout_ms": 0, 00:33:32.612 "dhchap_digests": [ 00:33:32.612 "sha256", 00:33:32.612 "sha384", 00:33:32.612 "sha512" 00:33:32.612 ], 00:33:32.612 "dhchap_dhgroups": [ 00:33:32.612 "null", 00:33:32.612 "ffdhe2048", 00:33:32.612 "ffdhe3072", 00:33:32.612 "ffdhe4096", 00:33:32.612 "ffdhe6144", 00:33:32.612 "ffdhe8192" 00:33:32.612 ] 00:33:32.612 } 00:33:32.612 }, 00:33:32.612 { 00:33:32.612 "method": "bdev_nvme_attach_controller", 00:33:32.612 "params": { 00:33:32.612 "name": "nvme0", 00:33:32.612 "trtype": "TCP", 00:33:32.612 "adrfam": "IPv4", 
00:33:32.612 "traddr": "127.0.0.1", 00:33:32.612 "trsvcid": "4420", 00:33:32.612 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:32.612 "prchk_reftag": false, 00:33:32.612 "prchk_guard": false, 00:33:32.612 "ctrlr_loss_timeout_sec": 0, 00:33:32.612 "reconnect_delay_sec": 0, 00:33:32.612 "fast_io_fail_timeout_sec": 0, 00:33:32.612 "psk": "key0", 00:33:32.612 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:32.612 "hdgst": false, 00:33:32.612 "ddgst": false, 00:33:32.612 "multipath": "multipath" 00:33:32.612 } 00:33:32.612 }, 00:33:32.612 { 00:33:32.612 "method": "bdev_nvme_set_hotplug", 00:33:32.612 "params": { 00:33:32.612 "period_us": 100000, 00:33:32.612 "enable": false 00:33:32.612 } 00:33:32.612 }, 00:33:32.612 { 00:33:32.612 "method": "bdev_wait_for_examine" 00:33:32.612 } 00:33:32.612 ] 00:33:32.612 }, 00:33:32.612 { 00:33:32.612 "subsystem": "nbd", 00:33:32.612 "config": [] 00:33:32.612 } 00:33:32.612 ] 00:33:32.612 }' 00:33:32.612 09:24:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:32.612 [2024-11-20 09:24:09.623860] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:33:32.612 [2024-11-20 09:24:09.623951] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3522407 ] 00:33:32.870 [2024-11-20 09:24:09.693129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.870 [2024-11-20 09:24:09.754562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:33.128 [2024-11-20 09:24:09.947717] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:33.128 09:24:10 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:33.128 09:24:10 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:33:33.128 09:24:10 keyring_file -- keyring/file.sh@121 -- # jq length 00:33:33.128 09:24:10 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:33:33.128 09:24:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:33.385 09:24:10 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:33:33.385 09:24:10 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:33:33.385 09:24:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:33.385 09:24:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:33.385 09:24:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:33.385 09:24:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:33.385 09:24:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:33.644 09:24:10 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:33:33.644 09:24:10 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:33:33.644 09:24:10 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:33.644 09:24:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:33.644 09:24:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:33.644 09:24:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:33.644 09:24:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:33.902 09:24:10 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:33:33.902 09:24:10 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:33:33.902 09:24:10 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:33:33.902 09:24:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:33:34.160 09:24:11 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:33:34.160 09:24:11 keyring_file -- keyring/file.sh@1 -- # cleanup 00:33:34.160 09:24:11 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.ZIwP6mJKiX /tmp/tmp.qg3fIS6apZ 00:33:34.160 09:24:11 keyring_file -- keyring/file.sh@20 -- # killprocess 3522407 00:33:34.160 09:24:11 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 3522407 ']' 00:33:34.160 09:24:11 keyring_file -- common/autotest_common.sh@956 -- # kill -0 3522407 00:33:34.160 09:24:11 keyring_file -- common/autotest_common.sh@957 -- # uname 00:33:34.417 09:24:11 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:34.417 09:24:11 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3522407 00:33:34.417 09:24:11 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:34.417 09:24:11 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:34.417 09:24:11 keyring_file -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 3522407' 00:33:34.417 killing process with pid 3522407 00:33:34.418 09:24:11 keyring_file -- common/autotest_common.sh@971 -- # kill 3522407 00:33:34.418 Received shutdown signal, test time was about 1.000000 seconds 00:33:34.418 00:33:34.418 Latency(us) 00:33:34.418 [2024-11-20T08:24:11.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:34.418 [2024-11-20T08:24:11.447Z] =================================================================================================================== 00:33:34.418 [2024-11-20T08:24:11.447Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:34.418 09:24:11 keyring_file -- common/autotest_common.sh@976 -- # wait 3522407 00:33:34.675 09:24:11 keyring_file -- keyring/file.sh@21 -- # killprocess 3520192 00:33:34.675 09:24:11 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 3520192 ']' 00:33:34.675 09:24:11 keyring_file -- common/autotest_common.sh@956 -- # kill -0 3520192 00:33:34.675 09:24:11 keyring_file -- common/autotest_common.sh@957 -- # uname 00:33:34.675 09:24:11 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:34.675 09:24:11 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3520192 00:33:34.675 09:24:11 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:34.675 09:24:11 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:34.675 09:24:11 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3520192' 00:33:34.675 killing process with pid 3520192 00:33:34.675 09:24:11 keyring_file -- common/autotest_common.sh@971 -- # kill 3520192 00:33:34.675 09:24:11 keyring_file -- common/autotest_common.sh@976 -- # wait 3520192 00:33:34.933 00:33:34.933 real 0m14.980s 00:33:34.933 user 0m38.197s 00:33:34.933 sys 0m3.294s 00:33:34.933 09:24:11 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:33:34.933 09:24:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:34.933 ************************************ 00:33:34.933 END TEST keyring_file 00:33:34.933 ************************************ 00:33:34.933 09:24:11 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:33:34.933 09:24:11 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:34.933 09:24:11 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:33:34.933 09:24:11 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:34.933 09:24:11 -- common/autotest_common.sh@10 -- # set +x 00:33:34.933 ************************************ 00:33:34.933 START TEST keyring_linux 00:33:34.933 ************************************ 00:33:34.933 09:24:11 keyring_linux -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:34.933 Joined session keyring: 851580101 00:33:35.192 * Looking for test storage... 
00:33:35.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:35.192 09:24:12 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:35.192 09:24:12 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:33:35.192 09:24:12 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:35.192 09:24:12 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:35.192 09:24:12 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:35.192 09:24:12 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:35.192 09:24:12 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:35.192 09:24:12 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:33:35.192 09:24:12 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:33:35.192 09:24:12 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:33:35.192 09:24:12 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:33:35.192 09:24:12 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:33:35.192 09:24:12 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:33:35.192 09:24:12 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:33:35.192 09:24:12 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:35.192 09:24:12 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:33:35.192 09:24:12 keyring_linux -- scripts/common.sh@345 -- # : 1 00:33:35.192 09:24:12 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:35.192 09:24:12 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:35.192 09:24:12 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:33:35.192 09:24:12 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:33:35.192 09:24:12 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:35.192 09:24:12 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:33:35.192 09:24:12 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:33:35.192 09:24:12 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:33:35.192 09:24:12 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:33:35.192 09:24:12 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:35.192 09:24:12 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:33:35.192 09:24:12 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:33:35.192 09:24:12 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:35.192 09:24:12 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:35.192 09:24:12 keyring_linux -- scripts/common.sh@368 -- # return 0 00:33:35.192 09:24:12 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:35.192 09:24:12 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:35.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:35.192 --rc genhtml_branch_coverage=1 00:33:35.192 --rc genhtml_function_coverage=1 00:33:35.192 --rc genhtml_legend=1 00:33:35.192 --rc geninfo_all_blocks=1 00:33:35.192 --rc geninfo_unexecuted_blocks=1 00:33:35.192 00:33:35.192 ' 00:33:35.192 09:24:12 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:35.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:35.192 --rc genhtml_branch_coverage=1 00:33:35.192 --rc genhtml_function_coverage=1 00:33:35.192 --rc genhtml_legend=1 00:33:35.192 --rc geninfo_all_blocks=1 00:33:35.192 --rc geninfo_unexecuted_blocks=1 00:33:35.192 00:33:35.192 ' 
00:33:35.192 09:24:12 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:35.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:35.192 --rc genhtml_branch_coverage=1 00:33:35.192 --rc genhtml_function_coverage=1 00:33:35.192 --rc genhtml_legend=1 00:33:35.192 --rc geninfo_all_blocks=1 00:33:35.192 --rc geninfo_unexecuted_blocks=1 00:33:35.192 00:33:35.192 ' 00:33:35.192 09:24:12 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:35.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:35.192 --rc genhtml_branch_coverage=1 00:33:35.192 --rc genhtml_function_coverage=1 00:33:35.192 --rc genhtml_legend=1 00:33:35.192 --rc geninfo_all_blocks=1 00:33:35.192 --rc geninfo_unexecuted_blocks=1 00:33:35.192 00:33:35.192 ' 00:33:35.192 09:24:12 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:35.192 09:24:12 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:35.192 09:24:12 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:33:35.192 09:24:12 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:35.192 09:24:12 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:35.192 09:24:12 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:35.192 09:24:12 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:35.192 09:24:12 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:35.192 09:24:12 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:35.192 09:24:12 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:35.192 09:24:12 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:35.192 09:24:12 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:35.192 09:24:12 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:33:35.192 09:24:12 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:35.192 09:24:12 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:35.192 09:24:12 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:35.192 09:24:12 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:35.192 09:24:12 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:35.192 09:24:12 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:35.192 09:24:12 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:35.192 09:24:12 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:33:35.192 09:24:12 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:35.192 09:24:12 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:35.192 09:24:12 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:35.192 09:24:12 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.192 09:24:12 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.192 09:24:12 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.192 09:24:12 keyring_linux -- paths/export.sh@5 -- # export PATH 00:33:35.192 09:24:12 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.192 09:24:12 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:33:35.192 09:24:12 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:35.192 09:24:12 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:35.192 09:24:12 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:35.192 09:24:12 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:35.192 09:24:12 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:35.192 09:24:12 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:33:35.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:35.192 09:24:12 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:35.192 09:24:12 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:35.192 09:24:12 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:35.192 09:24:12 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:35.192 09:24:12 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:35.192 09:24:12 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:35.192 09:24:12 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:33:35.192 09:24:12 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:33:35.193 09:24:12 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:33:35.193 09:24:12 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:33:35.193 09:24:12 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:35.193 09:24:12 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:33:35.193 09:24:12 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:35.193 09:24:12 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:35.193 09:24:12 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:33:35.193 09:24:12 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:35.193 09:24:12 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:35.193 09:24:12 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:33:35.193 09:24:12 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:35.193 09:24:12 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:33:35.193 09:24:12 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:33:35.193 09:24:12 keyring_linux -- nvmf/common.sh@733 -- # python - 00:33:35.193 09:24:12 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:33:35.193 09:24:12 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:33:35.193 /tmp/:spdk-test:key0 00:33:35.193 09:24:12 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:33:35.193 09:24:12 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:35.193 09:24:12 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:33:35.193 09:24:12 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:35.193 09:24:12 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:35.193 09:24:12 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:33:35.193 09:24:12 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:35.193 09:24:12 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:35.193 09:24:12 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:33:35.193 09:24:12 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:33:35.193 09:24:12 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:33:35.193 09:24:12 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:33:35.193 09:24:12 keyring_linux -- nvmf/common.sh@733 -- # python - 00:33:35.193 09:24:12 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:33:35.193 09:24:12 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:33:35.193 /tmp/:spdk-test:key1 00:33:35.193 09:24:12 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3522774 00:33:35.193 09:24:12 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:35.193 09:24:12 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3522774 00:33:35.193 09:24:12 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 3522774 ']' 00:33:35.193 09:24:12 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:35.193 09:24:12 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:35.193 09:24:12 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:35.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:35.193 09:24:12 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:35.193 09:24:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:35.451 [2024-11-20 09:24:12.253917] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:33:35.451 [2024-11-20 09:24:12.254026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3522774 ] 00:33:35.451 [2024-11-20 09:24:12.319930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:35.451 [2024-11-20 09:24:12.374349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:35.709 09:24:12 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:35.709 09:24:12 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:33:35.709 09:24:12 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:33:35.709 09:24:12 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.709 09:24:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:35.709 [2024-11-20 09:24:12.643685] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:35.709 null0 00:33:35.709 [2024-11-20 09:24:12.675729] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:35.709 [2024-11-20 09:24:12.676203] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:35.709 09:24:12 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.709 09:24:12 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:33:35.709 97873256 00:33:35.709 09:24:12 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:33:35.709 914142707 00:33:35.709 09:24:12 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3522840 00:33:35.709 09:24:12 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w 
randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:33:35.709 09:24:12 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3522840 /var/tmp/bperf.sock 00:33:35.709 09:24:12 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 3522840 ']' 00:33:35.709 09:24:12 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:35.709 09:24:12 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:35.709 09:24:12 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:35.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:35.709 09:24:12 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:35.709 09:24:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:35.967 [2024-11-20 09:24:12.743712] Starting SPDK v25.01-pre git sha1 7a5718d02 / DPDK 24.03.0 initialization... 
00:33:35.967 [2024-11-20 09:24:12.743787] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3522840 ] 00:33:35.967 [2024-11-20 09:24:12.811858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:35.967 [2024-11-20 09:24:12.868497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:35.967 09:24:12 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:35.967 09:24:12 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:33:35.967 09:24:12 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:33:35.967 09:24:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:33:36.533 09:24:13 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:33:36.533 09:24:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:36.791 09:24:13 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:36.791 09:24:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:37.049 [2024-11-20 09:24:13.853156] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:37.049 nvme0n1 00:33:37.049 09:24:13 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:33:37.049 09:24:13 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:33:37.049 09:24:13 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:37.049 09:24:13 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:37.049 09:24:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:37.049 09:24:13 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:37.307 09:24:14 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:33:37.307 09:24:14 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:37.307 09:24:14 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:33:37.307 09:24:14 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:33:37.307 09:24:14 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:37.307 09:24:14 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:33:37.307 09:24:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:37.564 09:24:14 keyring_linux -- keyring/linux.sh@25 -- # sn=97873256 00:33:37.564 09:24:14 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:33:37.564 09:24:14 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:37.564 09:24:14 keyring_linux -- keyring/linux.sh@26 -- # [[ 97873256 == \9\7\8\7\3\2\5\6 ]] 00:33:37.564 09:24:14 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 97873256 00:33:37.564 09:24:14 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:33:37.564 09:24:14 keyring_linux -- 
keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:37.822 Running I/O for 1 seconds... 00:33:38.755 11389.00 IOPS, 44.49 MiB/s 00:33:38.755 Latency(us) 00:33:38.755 [2024-11-20T08:24:15.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:38.755 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:38.755 nvme0n1 : 1.01 11394.06 44.51 0.00 0.00 11165.37 9272.13 21165.70 00:33:38.755 [2024-11-20T08:24:15.784Z] =================================================================================================================== 00:33:38.755 [2024-11-20T08:24:15.784Z] Total : 11394.06 44.51 0.00 0.00 11165.37 9272.13 21165.70 00:33:38.755 { 00:33:38.755 "results": [ 00:33:38.755 { 00:33:38.755 "job": "nvme0n1", 00:33:38.755 "core_mask": "0x2", 00:33:38.755 "workload": "randread", 00:33:38.755 "status": "finished", 00:33:38.755 "queue_depth": 128, 00:33:38.755 "io_size": 4096, 00:33:38.755 "runtime": 1.010878, 00:33:38.755 "iops": 11394.055464655477, 00:33:38.755 "mibps": 44.50802915881046, 00:33:38.755 "io_failed": 0, 00:33:38.755 "io_timeout": 0, 00:33:38.755 "avg_latency_us": 11165.367304251638, 00:33:38.755 "min_latency_us": 9272.13037037037, 00:33:38.755 "max_latency_us": 21165.70074074074 00:33:38.755 } 00:33:38.755 ], 00:33:38.755 "core_count": 1 00:33:38.755 } 00:33:38.755 09:24:15 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:38.755 09:24:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:39.013 09:24:15 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:33:39.013 09:24:15 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:33:39.013 09:24:15 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:39.013 09:24:15 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:39.013 09:24:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:39.013 09:24:15 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:39.271 09:24:16 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:33:39.271 09:24:16 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:39.271 09:24:16 keyring_linux -- keyring/linux.sh@23 -- # return 00:33:39.271 09:24:16 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:39.271 09:24:16 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:33:39.271 09:24:16 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:39.271 09:24:16 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:33:39.271 09:24:16 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:39.271 09:24:16 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:33:39.271 09:24:16 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:39.271 09:24:16 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:39.271 09:24:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:39.529 [2024-11-20 09:24:16.449873] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:39.529 [2024-11-20 09:24:16.449902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf62c0 (107): Transport endpoint is not connected 00:33:39.529 [2024-11-20 09:24:16.450897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf62c0 (9): Bad file descriptor 00:33:39.529 [2024-11-20 09:24:16.451897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:33:39.529 [2024-11-20 09:24:16.451915] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:39.529 [2024-11-20 09:24:16.451941] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:33:39.529 [2024-11-20 09:24:16.451956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:33:39.529 request: 00:33:39.529 { 00:33:39.529 "name": "nvme0", 00:33:39.529 "trtype": "tcp", 00:33:39.529 "traddr": "127.0.0.1", 00:33:39.529 "adrfam": "ipv4", 00:33:39.529 "trsvcid": "4420", 00:33:39.529 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:39.529 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:39.529 "prchk_reftag": false, 00:33:39.529 "prchk_guard": false, 00:33:39.529 "hdgst": false, 00:33:39.529 "ddgst": false, 00:33:39.529 "psk": ":spdk-test:key1", 00:33:39.529 "allow_unrecognized_csi": false, 00:33:39.529 "method": "bdev_nvme_attach_controller", 00:33:39.529 "req_id": 1 00:33:39.529 } 00:33:39.529 Got JSON-RPC error response 00:33:39.529 response: 00:33:39.529 { 00:33:39.529 "code": -5, 00:33:39.529 "message": "Input/output error" 00:33:39.529 } 00:33:39.529 09:24:16 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:33:39.529 09:24:16 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:39.529 09:24:16 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:39.529 09:24:16 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:39.529 09:24:16 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:33:39.529 09:24:16 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:39.529 09:24:16 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:33:39.529 09:24:16 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:33:39.529 09:24:16 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:33:39.529 09:24:16 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:39.529 09:24:16 keyring_linux -- keyring/linux.sh@33 -- # sn=97873256 00:33:39.529 09:24:16 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 97873256 00:33:39.529 1 links removed 00:33:39.529 09:24:16 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:39.529 09:24:16 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:33:39.529 
09:24:16 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:33:39.529 09:24:16 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:33:39.529 09:24:16 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:33:39.529 09:24:16 keyring_linux -- keyring/linux.sh@33 -- # sn=914142707 00:33:39.529 09:24:16 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 914142707 00:33:39.529 1 links removed 00:33:39.529 09:24:16 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3522840 00:33:39.529 09:24:16 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 3522840 ']' 00:33:39.529 09:24:16 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 3522840 00:33:39.529 09:24:16 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:33:39.529 09:24:16 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:39.529 09:24:16 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3522840 00:33:39.529 09:24:16 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:39.529 09:24:16 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:39.529 09:24:16 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3522840' 00:33:39.529 killing process with pid 3522840 00:33:39.529 09:24:16 keyring_linux -- common/autotest_common.sh@971 -- # kill 3522840 00:33:39.529 Received shutdown signal, test time was about 1.000000 seconds 00:33:39.529 00:33:39.529 Latency(us) 00:33:39.529 [2024-11-20T08:24:16.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:39.529 [2024-11-20T08:24:16.558Z] =================================================================================================================== 00:33:39.529 [2024-11-20T08:24:16.558Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:39.529 09:24:16 keyring_linux -- common/autotest_common.sh@976 -- # wait 3522840 
00:33:39.787 09:24:16 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3522774 00:33:39.787 09:24:16 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 3522774 ']' 00:33:39.787 09:24:16 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 3522774 00:33:39.787 09:24:16 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:33:39.787 09:24:16 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:39.787 09:24:16 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3522774 00:33:39.787 09:24:16 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:39.787 09:24:16 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:39.787 09:24:16 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3522774' 00:33:39.787 killing process with pid 3522774 00:33:39.787 09:24:16 keyring_linux -- common/autotest_common.sh@971 -- # kill 3522774 00:33:39.787 09:24:16 keyring_linux -- common/autotest_common.sh@976 -- # wait 3522774 00:33:40.351 00:33:40.351 real 0m5.271s 00:33:40.351 user 0m10.439s 00:33:40.351 sys 0m1.604s 00:33:40.351 09:24:17 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:40.352 09:24:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:40.352 ************************************ 00:33:40.352 END TEST keyring_linux 00:33:40.352 ************************************ 00:33:40.352 09:24:17 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:33:40.352 09:24:17 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:33:40.352 09:24:17 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:33:40.352 09:24:17 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:33:40.352 09:24:17 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:33:40.352 09:24:17 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:33:40.352 09:24:17 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:33:40.352 09:24:17 -- spdk/autotest.sh@342 -- # 
'[' 0 -eq 1 ']' 00:33:40.352 09:24:17 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:33:40.352 09:24:17 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:33:40.352 09:24:17 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:33:40.352 09:24:17 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:33:40.352 09:24:17 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:33:40.352 09:24:17 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:33:40.352 09:24:17 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:33:40.352 09:24:17 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:33:40.352 09:24:17 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:33:40.352 09:24:17 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:40.352 09:24:17 -- common/autotest_common.sh@10 -- # set +x 00:33:40.352 09:24:17 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:33:40.352 09:24:17 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:33:40.352 09:24:17 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:33:40.352 09:24:17 -- common/autotest_common.sh@10 -- # set +x 00:33:42.252 INFO: APP EXITING 00:33:42.252 INFO: killing all VMs 00:33:42.252 INFO: killing vhost app 00:33:42.252 INFO: EXIT DONE 00:33:43.660 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:33:43.660 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:33:43.660 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:33:43.660 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:33:43.660 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:33:43.660 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:33:43.660 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:33:43.660 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:33:43.660 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:33:43.660 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:33:43.660 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:33:43.660 
0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:33:43.661 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:33:43.661 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:33:43.661 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:33:43.661 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:33:43.661 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:33:45.079 Cleaning 00:33:45.079 Removing: /var/run/dpdk/spdk0/config 00:33:45.079 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:45.079 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:45.079 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:45.079 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:45.079 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:45.079 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:45.079 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:45.079 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:45.079 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:45.079 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:45.079 Removing: /var/run/dpdk/spdk1/config 00:33:45.079 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:45.079 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:45.079 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:45.079 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:45.079 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:45.079 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:45.079 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:45.079 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:45.079 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:45.079 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:45.079 Removing: /var/run/dpdk/spdk2/config 00:33:45.079 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:45.079 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:45.079 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:45.079 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:45.079 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:45.079 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:33:45.079 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:33:45.079 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:33:45.079 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:45.079 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:45.079 Removing: /var/run/dpdk/spdk3/config 00:33:45.079 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:45.079 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:45.079 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:45.079 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:45.079 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:33:45.079 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:33:45.079 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:33:45.079 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:33:45.079 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:45.079 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:45.079 Removing: /var/run/dpdk/spdk4/config 00:33:45.079 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:45.079 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:45.079 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:45.079 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:45.079 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:33:45.079 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:33:45.079 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:33:45.079 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:33:45.079 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:45.079 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:33:45.079 Removing: /dev/shm/bdev_svc_trace.1 00:33:45.079 Removing: /dev/shm/nvmf_trace.0 00:33:45.079 Removing: /dev/shm/spdk_tgt_trace.pid3199834 00:33:45.079 Removing: /var/run/dpdk/spdk0 00:33:45.079 Removing: /var/run/dpdk/spdk1 00:33:45.079 Removing: /var/run/dpdk/spdk2 00:33:45.079 Removing: /var/run/dpdk/spdk3 00:33:45.079 Removing: /var/run/dpdk/spdk4 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3198160 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3198896 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3199834 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3200173 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3200856 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3200996 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3201720 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3201845 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3202105 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3203308 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3204231 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3204548 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3204751 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3204977 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3205272 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3205437 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3205592 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3205783 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3206094 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3208584 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3208748 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3208908 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3208917 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3209346 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3209355 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3209786 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3209789 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3210028 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3210091 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3210255 00:33:45.079 Removing: 
/var/run/dpdk/spdk_pid3210289 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3210768 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3210924 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3211126 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3213359 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3215997 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3223620 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3224031 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3226553 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3226825 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3229361 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3233207 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3235320 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3241816 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3247061 00:33:45.079 Removing: /var/run/dpdk/spdk_pid3248300 00:33:45.080 Removing: /var/run/dpdk/spdk_pid3249048 00:33:45.080 Removing: /var/run/dpdk/spdk_pid3259952 00:33:45.080 Removing: /var/run/dpdk/spdk_pid3262363 00:33:45.080 Removing: /var/run/dpdk/spdk_pid3290312 00:33:45.080 Removing: /var/run/dpdk/spdk_pid3293498 00:33:45.080 Removing: /var/run/dpdk/spdk_pid3298065 00:33:45.080 Removing: /var/run/dpdk/spdk_pid3302344 00:33:45.080 Removing: /var/run/dpdk/spdk_pid3302346 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3303009 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3303663 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3304202 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3304601 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3304724 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3304863 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3304999 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3305004 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3305660 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3306266 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3306856 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3307262 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3307380 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3307527 
00:33:45.339 Removing: /var/run/dpdk/spdk_pid3308539 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3309272 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3314499 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3342821 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3345714 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3346925 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3348746 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3348890 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3349030 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3349173 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3349726 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3350938 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3351806 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3352235 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3353817 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3354155 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3354713 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3357107 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3360395 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3360396 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3360397 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3362615 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3367464 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3370111 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3374016 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3374958 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3376054 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3377020 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3379915 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3382985 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3385357 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3389592 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3389598 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3392496 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3392630 00:33:45.339 Removing: /var/run/dpdk/spdk_pid3392769 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3393036 00:33:45.340 Removing: 
/var/run/dpdk/spdk_pid3393167 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3395927 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3396266 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3398941 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3400917 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3404356 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3407680 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3414182 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3419092 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3419134 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3431789 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3432322 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3432732 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3433160 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3433841 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3434249 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3434654 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3435064 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3437573 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3437726 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3441579 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3441690 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3445057 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3447667 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3455210 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3455613 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3458120 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3458270 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3460904 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3464601 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3466763 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3473174 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3478376 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3479576 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3480243 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3491105 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3493388 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3495400 
00:33:45.340 Removing: /var/run/dpdk/spdk_pid3500335 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3500465 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3503365 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3504764 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3506160 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3507020 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3508479 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3509317 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3514743 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3515107 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3515495 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3517048 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3517405 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3517734 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3520192 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3520196 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3522407 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3522774 00:33:45.340 Removing: /var/run/dpdk/spdk_pid3522840 00:33:45.340 Clean 00:33:45.598 09:24:22 -- common/autotest_common.sh@1451 -- # return 0 00:33:45.598 09:24:22 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:33:45.598 09:24:22 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:45.598 09:24:22 -- common/autotest_common.sh@10 -- # set +x 00:33:45.598 09:24:22 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:33:45.598 09:24:22 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:45.598 09:24:22 -- common/autotest_common.sh@10 -- # set +x 00:33:45.598 09:24:22 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:45.598 09:24:22 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:33:45.598 09:24:22 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:33:45.598 09:24:22 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:33:45.598 09:24:22 
-- spdk/autotest.sh@394 -- # hostname 00:33:45.598 09:24:22 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:33:45.857 geninfo: WARNING: invalid characters removed from testname! 00:34:17.937 09:24:53 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:20.475 09:24:57 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:23.768 09:25:00 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:26.307 09:25:03 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:29.599 09:25:06 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:32.891 09:25:09 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:35.429 09:25:12 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:35.429 09:25:12 -- spdk/autorun.sh@1 -- $ timing_finish 00:34:35.429 09:25:12 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:34:35.429 09:25:12 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:35.429 09:25:12 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:34:35.429 09:25:12 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:35.429 + [[ -n 
3127571 ]] 00:34:35.429 + sudo kill 3127571 00:34:35.438 [Pipeline] } 00:34:35.453 [Pipeline] // stage 00:34:35.458 [Pipeline] } 00:34:35.471 [Pipeline] // timeout 00:34:35.476 [Pipeline] } 00:34:35.489 [Pipeline] // catchError 00:34:35.493 [Pipeline] } 00:34:35.505 [Pipeline] // wrap 00:34:35.510 [Pipeline] } 00:34:35.523 [Pipeline] // catchError 00:34:35.531 [Pipeline] stage 00:34:35.532 [Pipeline] { (Epilogue) 00:34:35.541 [Pipeline] catchError 00:34:35.542 [Pipeline] { 00:34:35.551 [Pipeline] echo 00:34:35.552 Cleanup processes 00:34:35.556 [Pipeline] sh 00:34:35.841 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:35.841 3533481 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:35.854 [Pipeline] sh 00:34:36.138 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:36.139 ++ grep -v 'sudo pgrep' 00:34:36.139 ++ awk '{print $1}' 00:34:36.139 + sudo kill -9 00:34:36.139 + true 00:34:36.148 [Pipeline] sh 00:34:36.427 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:46.430 [Pipeline] sh 00:34:46.715 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:46.715 Artifacts sizes are good 00:34:46.729 [Pipeline] archiveArtifacts 00:34:46.736 Archiving artifacts 00:34:46.916 [Pipeline] sh 00:34:47.248 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:34:47.263 [Pipeline] cleanWs 00:34:47.275 [WS-CLEANUP] Deleting project workspace... 00:34:47.275 [WS-CLEANUP] Deferred wipeout is used... 00:34:47.283 [WS-CLEANUP] done 00:34:47.284 [Pipeline] } 00:34:47.301 [Pipeline] // catchError 00:34:47.314 [Pipeline] sh 00:34:47.599 + logger -p user.info -t JENKINS-CI 00:34:47.607 [Pipeline] } 00:34:47.620 [Pipeline] // stage 00:34:47.625 [Pipeline] } 00:34:47.639 [Pipeline] // node 00:34:47.645 [Pipeline] End of Pipeline 00:34:47.678 Finished: SUCCESS